CN118368464A - Video interaction method, device, electronic device and storage medium - Google Patents
- Publication number
- CN118368464A (application CN202310096276.3A)
- Authority
- CN
- China
- Prior art keywords
- video
- target
- editing
- content
- discussion
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/431—Generation of visual interfaces for content selection or interaction; Content or additional data rendering
- H04N21/4312—Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/44—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
- H04N21/44012—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving rendering scenes according to scene graphs, e.g. MPEG-4 scene graphs
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/44—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
- H04N21/44016—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving splicing one content stream with another content stream, e.g. for substituting a video clip
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/47—End-user applications
- H04N21/472—End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
- H04N21/47205—End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content for manipulating displayed content, e.g. interacting with MPEG-4 objects, editing locally
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Databases & Information Systems (AREA)
- Human Computer Interaction (AREA)
- Television Signal Processing For Recording (AREA)
Abstract
The application relates to the field of Internet technology, and in particular to a video interaction method, a video interaction apparatus, an electronic device, and a storage medium, for improving the combination of video and object interaction. The method comprises the following steps: in response to a capture operation on a target video in a video playing interface, displaying the captured target content in the video playing interface; in response to an editing operation on the target content, presenting the editing effect applied to the target content, where the editing effect is obtained by editing, according to a specified editing function, at least one of the picture information and the audio information relating to at least two video objects to be edited in the target content, and the editing processes the interaction state of the at least two video objects to be edited so that their post-editing interaction state conforms to an expected interaction scene; and in response to a sharing operation on the target content, sharing the target content together with its editing effect. The present application can thus improve the combination of video and object interaction.
Description
Technical Field
The present application relates to the field of internet technologies, and in particular, to a video interaction method, a video interaction device, an electronic device, and a storage medium.
Background
With the development of video technology, people can watch all kinds of long-form videos, such as movies, TV series, and variety shows, through video applications, and can capture video frames and publish them to a public interaction interface. Objects can browse the social dynamic content in the interaction interface and interact with it (for example, like and comment).
However, the social features of existing platforms are designed generically for the whole viewing population. Taking a typical video application as an example, as illustrated in FIG. 1A, an object can tap a screenshot button while watching a video to capture the current video frame and then publish the captured picture to the interaction interface. As shown in FIG. 1B, a textual description may be added to the screenshot before it is published to the public interaction interface.
For viewing groups with specific interaction needs (for example, discussing certain objects in the video through social dynamic content), a good interaction experience cannot be obtained while watching the video on existing long-video platforms; such viewers can only interact on other platforms after finishing the video, so their specific needs are not met.
Therefore, how to better combine video playback with object interaction is a technical problem to be solved.
Disclosure of Invention
The embodiments of the present application provide a video interaction method, a video interaction apparatus, an electronic device, and a storage medium, for improving the combination of video and object interaction and enriching video interactivity.
A video interaction method in an embodiment of the present application comprises the following steps:
in response to a capture operation on a target video in a video playing interface, displaying the captured target content in the video playing interface;
in response to an editing operation on the target content, presenting the editing effect applied to the target content, where the editing effect is obtained by editing, with a corresponding effect and according to a specified editing function, at least one of the picture information and the audio information relating to at least two video objects to be edited in the target content, and the editing processes the interaction state of the at least two video objects to be edited so that their post-editing interaction state conforms to an expected interaction scene;
and in response to a sharing operation on the target content, sharing the target content together with its editing effect.
Another video interaction method provided by an embodiment of the present application comprises the following steps:
after receiving an editing request for target content, returning at least one editing function to the client, where the target content is captured by the client from a target video in a video playing interface;
acquiring the editing effect applied to the target content and storing publication information for the target content carrying the editing effect, where the editing effect is obtained by editing, according to a specified editing function, at least one of the picture information and the audio information relating to at least two video objects to be edited in the target content, and the editing processes the interaction state of the at least two video objects to be edited so that their post-editing interaction state conforms to an expected interaction scene.
An embodiment of the present application provides a video interaction apparatus, comprising:
a capture unit, configured to display, in response to a capture operation on a target video in a video playing interface, the captured target content in the video playing interface;
an editing unit, configured to present, in response to an editing operation on the target content, the editing effect applied to the target content, where the editing effect is obtained by editing, with a corresponding effect and according to a specified editing function, at least one of the picture information and the audio information relating to at least two video objects to be edited in the target content, and the editing processes the interaction state of the at least two video objects to be edited so that their post-editing interaction state conforms to an expected interaction scene;
and a sharing unit, configured to share, in response to a sharing operation on the target content, the target content together with its editing effect.
Optionally, the sharing unit is specifically configured to:
in response to a discussion-group selection and a sharing operation on the target content, publish the target content carrying the editing effect, in the form of social dynamic content, to the interactive interface of the target video and to the discussion area corresponding to the selected designated discussion group, where each discussion group contains at least two video objects associated with the target video.
Optionally, the sharing unit is specifically configured to:
display, in the discussion area, the target content clustered together with other content related to the target content, and present the viewpoint information corresponding to each piece of content in turn in a viewpoint area, where the other content is other social dynamic content whose similarity to the target content exceeds a preset threshold, and the display order of the viewpoint information corresponding to each piece of content follows the display order of the target content and the other content (a sketch of such clustering follows).
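A minimal sketch of this clustered display, assuming each post already carries a precomputed content embedding (all names here are hypothetical, not the patent's own code): posts whose similarity to the target content exceeds the preset threshold are grouped after it, most similar first, and their viewpoint lists can then be rendered in the same order.

```python
from dataclasses import dataclass, field

@dataclass
class Post:
    post_id: str
    embedding: list[float]                 # assumed precomputed content feature
    viewpoints: list[str] = field(default_factory=list)

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(y * y for y in b) ** 0.5
    return dot / (na * nb) if na and nb else 0.0

def cluster_with_target(target: Post, others: list[Post],
                        threshold: float = 0.8) -> list[Post]:
    """Return the target post followed by similar posts, most similar first;
    the viewpoint areas are rendered in this same order."""
    scored = [(cosine(target.embedding, p.embedding), p) for p in others]
    ranked = sorted(scored, key=lambda t: t[0], reverse=True)
    return [target] + [p for score, p in ranked if score > threshold]
```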
Optionally, the sharing unit is specifically configured to:
share, in response to a sharing operation on the target content, the target content carrying the editing effect to a third-party social platform;
the apparatus further comprises:
a jump unit, configured to jump to the video playing interface in response to a trigger operation, on the third-party social platform, for viewing the target video corresponding to the target content; when the target content continues to play in the video playing interface, the part carrying the editing effect is restored to the original video picture (see the sketch below).
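One way to realize this jump-back behaviour is for the share link to carry only the video identity and playback position, not the edit overlay, so that resuming playback in the player naturally shows the original picture. The following is a hedged sketch; the URL scheme and field names are hypothetical.

```python
from urllib.parse import urlencode, parse_qs, urlparse

def build_share_link(video_id: str, position_s: int, edit_id: str) -> str:
    # edit_id identifies the edited target content shown on the social platform
    query = urlencode({"vid": video_id, "t": position_s, "edit": edit_id})
    return f"https://example-video-app/play?{query}"

def handle_share_link(url: str) -> dict:
    q = parse_qs(urlparse(url).query)
    # Restore the original picture: the edit overlay is intentionally dropped
    # when playback continues inside the video playing interface.
    return {"video_id": q["vid"][0], "seek_to": int(q["t"][0]), "overlay": None}
```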
Optionally, the editing unit is specifically configured to:
present at least one editing function in response to an editing operation on the target content;
present, in response to a selection operation on a designated editing function, at least one effect control corresponding to that editing function;
and, each time an editing operation triggered through a designated effect control among the at least one effect control is responded to, perform editing on the corresponding part of the target content based on that effect control and present the corresponding editing effect.
Optionally, the target content is a screenshot, and the specified editing function is an editing function for a specified part of a video object; the editing unit is specifically configured to:
identify the at least two video objects to be edited in the target content in response to an editing operation triggered by a fine-retouch effect control among the at least one effect control, and move at least one of the at least two video objects to be edited according to the relevant parts, based on the fine-retouch effect control, to present the editing effect of the fine-retouch edition; or
identify the at least two video objects to be edited in the target content in response to an editing operation triggered by a funny effect control among the at least one effect control, connect the relevant parts of the at least two video objects to be edited based on the funny effect control, and present the editing effect of the funny edition.
Optionally, the apparatus further includes:
a first response unit, configured to highlight, in response to an object interaction operation for the target video, at least one key video segment in the play progress bar of the target video.
Optionally, the target content is a screenshot, and the specified editing function is an editing function for a specified part of a video object;
if the first response unit responds to the object interaction operation before the editing unit responds to the editing operation on the target content, the editing unit is specifically configured to:
in response to an editing operation on the target content, identify the at least two video objects to be edited in the target content, move at least one of them according to the relevant parts, and present the editing effect of the fine-retouch edition; or
in response to an editing operation on the target content, identify the at least two video objects to be edited in the target content, connect their relevant parts, and present the editing effect of the funny edition.
Optionally, the editing unit is specifically configured to:
adjust, by moving at least one video object to be edited, the distance between the target areas in the relevant parts of the video objects to be edited into a target distance range, and display the editing effect of the fine-retouch edition.
Optionally, the editing unit is specifically configured to:
twist the target area in the relevant part of at least one video object to be edited so that it connects with the relevant parts of the other video objects to be edited, and display the editing effect of the funny edition (a sketch of both effects follows).
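A hedged sketch of both effects, assuming a portrait-segmentation mask and face-region coordinates are already available (for example from the portrait-segmentation capability described later in the concepts section). The hole-filling is deliberately naive and the function names are hypothetical; the funny edition's "twist" can reuse a local warp such as the liquify sketch shown later.

```python
import numpy as np

def needed_shift(region_a: tuple[int, int], region_b: tuple[int, int],
                 target_dist: float) -> tuple[int, int]:
    """Fine-retouch edition: how far to move object B so that the distance
    between the two target regions (given as (x, y) centres) falls to
    target_dist."""
    vec = np.array(region_a, dtype=float) - np.array(region_b, dtype=float)
    dist = float(np.linalg.norm(vec))
    if dist <= target_dist:
        return 0, 0                        # already within the target range
    step = vec * (1.0 - target_dist / dist)
    return int(step[0]), int(step[1])      # (dx, dy)

def move_object(frame: np.ndarray, mask: np.ndarray,
                dx: int, dy: int) -> np.ndarray:
    """Translate one segmented object; the vacated area is filled naively
    (a real implementation would inpaint the background)."""
    out = frame.copy()
    moved = np.roll(frame, shift=(dy, dx), axis=(0, 1))
    moved_mask = np.roll(mask, shift=(dy, dx), axis=(0, 1))
    out[mask > 0] = 0
    out[moved_mask > 0] = moved[moved_mask > 0]
    return out
```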
Optionally, the apparatus further includes:
a first prompt unit, configured to present an object interaction control in the video playing interface;
and to present corresponding first prompt information in the video playing interface, where the first prompt information is used to guide the target object to perform the object interaction operation so as to view the key video segments related to each discussion group in the target video.
Optionally, the apparatus further includes:
a second prompt unit, configured to present second prompt information at the position, in the play progress bar, of a target key video segment, where the second prompt information is used to guide the target object to view the discussion area corresponding to the target key video segment, and the target key video segment is selected from the at least one key video segment based on the current playing time point of the target video and the playing times of the at least one key video segment.
Optionally, the second prompt information further includes a jump control; the second prompting unit is further used for:
in response to a trigger operation on the jump control, present the interactive interface of the target video and display, in the interactive interface, the discussion area corresponding to the target key video segment, where the discussion area corresponding to the target key video segment is used for displaying the social dynamic content corresponding to the target discussion group associated with the target key video segment.
Optionally, the apparatus further includes:
a second response unit, configured to highlight, in the video playing interface, a discussion control for jumping to the interactive interface;
and, in response to a trigger operation on the discussion control, to present the interactive interface of the target video and display, in the interactive interface, the discussion area corresponding to the target key video segment, where the discussion area corresponding to the target key video segment is used for displaying the social dynamic content corresponding to the target discussion group associated with the target key video segment.
Optionally, each key video segment is associated with at least one discussion group; the apparatus further comprises:
A determining unit, configured to determine the target discussion group by:
if the target key video segment is associated with only one discussion group, using that discussion group as the target discussion group;
if the target key video segment is associated with a plurality of discussion groups, selecting one of them as the target discussion group based on the interaction quantity of the social dynamic content corresponding to each discussion group (a direct sketch of this rule follows the list).
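A direct rendering of this selection rule (the field names are hypothetical):

```python
def pick_target_group(groups: list[dict]) -> dict:
    """If the key segment has one associated discussion group, use it;
    otherwise pick the group whose social dynamic content has the most
    interactions."""
    if len(groups) == 1:
        return groups[0]
    return max(groups, key=lambda g: g.get("interaction_count", 0))
```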
Optionally, the apparatus further includes:
The creating unit is used for responding to the discussion area creating operation triggered by the interactive interface and presenting a plurality of video objects related to the target video;
And responding to the selection of at least two target video objects in the plurality of video objects and the discussion group naming operation, and displaying a newly added discussion area corresponding to a discussion group formed by the at least two target video objects in the interactive interface.
Optionally, the creating unit is further configured to:
before the newly added discussion area corresponding to the discussion group formed by the at least two target video objects is displayed in the interactive interface, if it is determined that a discussion group corresponding to the selected at least two target video objects already exists, display a corresponding creation-failure prompt in the video playing interface.
Optionally, the interactive interface includes labels corresponding to each discussion group; the apparatus further comprises:
The switching unit is used for responding to the label selection operation triggered on the interactive interface and switching to a discussion area of a discussion group corresponding to the selected target label;
Wherein each discussion zone includes at least one of the following content presentation modes:
displaying all social dynamic contents which are related to the discussion zone and aim at the target video according to the release time of the social dynamic contents;
displaying all social dynamic contents which are related to the discussion zone and aim at the target video according to the interaction quantity of the social dynamic contents;
and displaying, according to the publication time or the interaction quantity of the social dynamic content, the social dynamic content that is related to the discussion area and was published within a set duration before and after the current playing time point of the target video (a sketch of the three modes follows the list).
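A sketch of the three display modes, assuming each post records its publication time, interaction count, and an anchor time in the video (all field names hypothetical):

```python
def order_posts(posts: list[dict], mode: str, now_s: int = 0,
                window_s: int = 60) -> list[dict]:
    if mode == "by_time":
        return sorted(posts, key=lambda p: p["publish_time"], reverse=True)
    if mode == "by_interactions":
        return sorted(posts, key=lambda p: p["interaction_count"], reverse=True)
    if mode == "near_playhead":
        # keep only posts anchored within +/- window_s of the current position
        near = [p for p in posts if abs(p["anchor_time"] - now_s) <= window_s]
        return sorted(near, key=lambda p: p["publish_time"], reverse=True)
    raise ValueError(f"unknown mode: {mode}")
```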
Optionally, the apparatus further includes:
a third response unit, configured to unlock, after determining that the total number of pieces of social dynamic content related to the current discussion group has reached a specified number, an additional multimedia resource related to the current discussion group, where the current discussion group is the discussion group corresponding to the discussion area currently displayed in the discussion interface;
and to play the additional multimedia resource in response to a viewing operation on the additional multimedia resource.
Optionally, the third response unit is further configured to:
Displaying a quantity progress bar corresponding to the total quantity of the social dynamic content related to the current discussion group in the interactive interface, wherein the total length of the quantity progress bar is determined based on the designated quantity;
The third response unit is specifically configured to:
When the total quantity of the social dynamic contents related to the current discussion group reaches a specified quantity, displaying resource links related to the additional multimedia resources in the quantity progress bar;
And responding to the triggering operation of the resource link, and jumping to the resource display interface related to the additional multimedia resource for playing.
Optionally, the third response unit is further configured to:
after the resource link related to the additional multimedia resource is displayed in the quantity progress bar, hide the quantity progress bar and the resource link, and present a corresponding view control on the interactive interface;
and, in response to a trigger operation on the view control, jump to the resource display interface related to the additional multimedia resource for playback (a sketch of the unlock flow follows).
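A minimal client-side sketch of the unlock flow around the quantity progress bar; the threshold handling and the link value are hypothetical.

```python
def progress_state(post_count: int, required: int) -> dict:
    """Length of the quantity progress bar is determined by the specified
    number; once that number is reached, the resource link appears."""
    ratio = min(post_count / required, 1.0)
    state = {"progress": ratio, "resource_link": None}
    if post_count >= required:
        # Shown inside the quantity progress bar, or behind a view control
        # once the bar has been hidden.
        state["resource_link"] = "https://example/resource/unlock"
    return state
```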
Another video interaction device provided by the embodiment of the present application includes:
a feedback unit, configured to return at least one editing function to the client after receiving an editing request for target content, where the target content is captured by the client from a target video in a video playing interface;
a storage unit, configured to acquire the editing effect applied to the target content and store publication information for the target content carrying the editing effect, where the editing effect is obtained by editing, according to a specified editing function, at least one of the picture information and the audio information relating to at least two video objects to be edited in the target content, and the editing processes the interaction state of the at least two video objects to be edited so that their post-editing interaction state conforms to an expected interaction scene.
Optionally, the feedback unit is further configured to:
receive a discussion-group selection request for the target content and return, to the client, information about each discussion group associated with the target video, where each discussion group contains at least two video objects related to the target video;
receive a request for publishing the target content to a designated discussion group and return, to the client, the discussion-area information associated with the designated discussion group, so that the client publishes the target content carrying the editing effect, in the form of social dynamic content, to the interactive interface of the target video and to the discussion area corresponding to the selected designated discussion group.
Optionally, the feedback unit is further configured to:
receive an object interaction request for the target video, sent by the client in response to an object interaction operation for the target video;
and acquire detail information of at least one key video segment contained in the target video and return the acquired detail information to the client, so that the client highlights the at least one key video segment in the play progress bar of the target video according to the detail information, where each key video segment corresponds to at least one group of video objects.
Optionally, the feedback unit is further configured to:
Receiving a sorting request sent by a client, wherein the sorting request comprises a content display mode corresponding to a current discussion area; the current discussion area is the currently displayed discussion area of the discussion interface;
According to the content display mode contained in the ordering request, ordering the social dynamic content related to the current discussion area;
And returning the sequencing result to the client so that the client displays relevant social dynamic content in the current discussion area in the interactive interface based on the sequencing result.
Optionally, the feedback unit is further configured to:
receive and store interaction data for the current discussion group sent by the client, where the current discussion group is the discussion group corresponding to the discussion area currently displayed in the discussion interface;
and, after the number of pieces of social dynamic content related to the current discussion area reaches the specified number, unlock the additional multimedia resource related to the current discussion group and feed back the corresponding resource link to the client, so that the client plays the additional multimedia resource based on the resource link (a server-side sketch follows).
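A server-side counterpart, again a sketch with hypothetical names: the server counts posts per discussion group and returns a resource link once the configured threshold is reached.

```python
from collections import defaultdict

POST_COUNTS: dict[str, int] = defaultdict(int)
THRESHOLD = 100   # hypothetical specified number

def on_post_published(group_id: str) -> dict | None:
    """Called whenever social dynamic content is stored for a group; returns
    the unlock payload exactly once, when the threshold is first reached."""
    POST_COUNTS[group_id] += 1
    if POST_COUNTS[group_id] == THRESHOLD:
        return {"group_id": group_id,
                "resource_link": f"/resources/{group_id}/bonus"}
    return None
```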
An electronic device in an embodiment of the present application includes a processor and a memory, where the memory stores a computer program, and when the computer program is executed by the processor, causes the processor to execute any one of the steps of the video interaction method described above.
An embodiment of the present application provides a computer readable storage medium including a computer program for causing an electronic device to execute the steps of any one of the video interaction methods described above when the computer program is run on the electronic device.
Embodiments of the present application provide a computer program product comprising a computer program stored in a computer readable storage medium; when the processor of the electronic device reads the computer program from the computer readable storage medium, the processor executes the computer program, so that the electronic device performs the steps of any one of the video interaction methods described above.
The application has the following beneficial effects:
The embodiments of the present application provide a video interaction method, a video interaction apparatus, an electronic device, and a storage medium. In the present application, target content captured in the video playing interface of a client can undergo secondary creation using a specified editing function; for example, the picture information and/or audio information contained in the target content can be processed with a specific effect. This secondary creation happens directly while the target object is watching the video, without a third-party creation platform, so the operation is simple and convenient. While watching a video, if the target object wants to interact around the video objects in it, target content can be captured on demand and edited based on a specified editing function. The editing is targeted: it mainly adjusts the interaction state of the at least two video objects to be edited in the target content so that it conforms to an expected interaction scene, which facilitates subsequent interaction between objects around the edited video objects and the expected interaction scene. On this basis, after the target content has been secondarily created, the target content carrying an editing effect can be shared, which enriches the interactive content between objects, improves content quality, and optimizes the combination of video and object interaction.
Additional features and advantages of the application will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the application. The objectives and other advantages of the application will be realized and attained by the structure particularly pointed out in the written description and claims thereof as well as the appended drawings.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this specification, illustrate embodiments of the application and together with the description serve to explain the application and do not constitute a limitation on the application. In the drawings:
FIG. 1A is a diagram illustrating a screenshot of the related art;
FIG. 1B is a schematic diagram of a post-posting process in the related art;
fig. 2 is a schematic diagram of an application scenario in an embodiment of the present application;
FIG. 3 is a flowchart illustrating a video interaction method according to an embodiment of the present application;
FIG. 4 is a schematic diagram of a video playing interface according to an embodiment of the present application;
FIG. 5A is a schematic diagram of an interactive interface according to an embodiment of the present application;
FIG. 5B is a schematic illustration of a discussion area post cluster in an embodiment of the application;
FIG. 5C is a schematic diagram of a third party social platform according to an embodiment of the present application;
Fig. 5D is a schematic diagram of a video playing process according to an embodiment of the present application;
FIG. 6 is a schematic illustration showing the highlighting of a famous-scene segment according to an embodiment of the application;
FIG. 7 is a schematic diagram of a first prompt message according to an embodiment of the present application;
FIG. 8 is a schematic diagram of a second prompt message according to an embodiment of the present application;
FIG. 9 is a schematic diagram of a jump control in an embodiment of the application;
FIG. 10 is a schematic illustration of a secondary-creation interface in accordance with an embodiment of the present application;
FIG. 11A is a diagram of a font-effects page in accordance with an embodiment of the present application;
FIG. 11B is a diagram of a color effects page according to an embodiment of the present application;
FIG. 11C is a diagram of a description effect page according to an embodiment of the present application;
FIG. 12 is a schematic diagram of a magnification effect page according to an embodiment of the present application;
FIG. 13A is a schematic diagram of a mirror effect in an embodiment of the application;
FIG. 13B is a schematic diagram of a fish-eye effect according to an embodiment of the application;
FIG. 14A is a schematic diagram of the head-push effect of the fine-retouch edition in an embodiment of the application;
FIG. 14B is a schematic diagram of the head-push effect of the funny edition in an embodiment of the application;
FIG. 14C is a logic diagram of an implementation of the head-push function in an embodiment of the present application;
FIG. 15 is a schematic diagram of the effect corresponding to a funny-edition creation function in an embodiment of the present application;
FIG. 16A is a diagram of a clip effect page according to an embodiment of the present application;
FIG. 16B is a diagram of a rotation effect page according to an embodiment of the present application;
Fig. 16C is a schematic diagram of a puzzle effect page according to an embodiment of the present application;
fig. 16D is a schematic diagram of another puzzle effect page according to an embodiment of the application;
FIG. 17A is a diagram of a filter effect page according to an embodiment of the present application;
FIG. 17B is a schematic diagram of a sticker effect page in an embodiment of the present application;
FIG. 17C is a diagram illustrating another embodiment of a decal effect page;
FIG. 18A is a schematic diagram showing the effect of overlapping a specified editing function with other editing functions in an embodiment of the present application;
FIG. 18B is a schematic diagram showing the effects of overlapping use of another specified editing function with other editing functions in an embodiment of the present application;
fig. 19 is a schematic diagram of a process of uploading pictures in an embodiment of the application;
FIG. 20A is a schematic diagram of an interactive interface according to an embodiment of the present application;
FIG. 20B is a schematic diagram of another interactive interface according to an embodiment of the present application;
FIG. 21 is a schematic diagram of an add discussion group page in an embodiment of the application;
FIG. 22 is a schematic diagram illustrating selection of video objects and naming of discussion groups in accordance with an embodiment of the present application;
FIG. 23A is a schematic diagram of a success cue creation in an embodiment of the application;
FIG. 23B is a schematic diagram of a create failure hint in an embodiment of the present application;
FIG. 24 is a diagram of a quantity progress bar in accordance with an embodiment of the present application;
FIG. 25 is a diagram of another quantity progress bar in accordance with an embodiment of the present application;
FIG. 26 is a schematic diagram of a view control in an embodiment of the application;
FIG. 27 is a diagram of a resource display interface according to an embodiment of the present application;
FIG. 28 is a flowchart illustrating another video interaction method according to an embodiment of the present application;
Fig. 29A is an overall flowchart overview of a video interaction method according to an embodiment of the present application;
FIG. 29B is a general flow chart overview of another video interaction method according to an embodiment of the present application;
FIG. 30A is a timing diagram illustrating a video interaction method according to an embodiment of the present application;
FIG. 30B is a timing diagram illustrating another video interaction method according to an embodiment of the present application;
FIG. 31 is a schematic diagram illustrating a structure of a video interaction device according to an embodiment of the present application;
FIG. 32 is a schematic diagram of a video interaction device according to another embodiment of the present application;
FIG. 33 is a schematic diagram showing a hardware configuration of an electronic device to which the embodiment of the present application is applied;
Fig. 34 is a schematic diagram of a hardware configuration of an electronic device to which the embodiment of the present application is applied.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present application more apparent, the technical solutions of the present application will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present application, and it is apparent that the described embodiments are some embodiments of the technical solutions of the present application, but not all embodiments. All other embodiments, based on the embodiments described in the present document, which can be obtained by a person skilled in the art without any creative effort, are within the scope of protection of the technical solutions of the present application.
Some of the concepts involved in the embodiments of the present application are described below.
Object interaction operation: an operation through which a target object expands interaction around a video object related to the target video, such as starting a discussion about related video objects or browsing content related to a video object. It is not limited to interaction between the target object and other objects, or between the target object and the video platform or other platforms. The operation may be triggered while the target object watches the target video; after it is triggered, one or more key video segments may be highlighted in the play progress bar of the target video so that the object can interact based on these segments, for example by shipping a CP. Here, CP is an abbreviation of the English word "couple"; it now refers to the name viewers give to their favorite on-screen lovers, partners, or pairings, and any two or more connected persons may be called a CP regardless of gender. "Shipping" a CP means supporting a favorite CP, mainly by expressing strong enthusiasm for the pairing, day after day.
Discussion group: a combination of video objects related to the target video, in the scenario where the target object watches the target video; the target object can join the discussion and publish related social dynamic content (such as posts). In the present application, each discussion group contains at least two video objects related to the target video; taking two video objects as an example, a discussion group may be formed from each pair of video objects, and each discussion group corresponds to a topic label under which target objects can join the discussion.
Bubble: a bubble pop-up window, typically composed of a rectangle with a triangular arrow, which may be used for hints and guidance.
Secondary creation: creating a new work on the basis of an existing one, called secondary creation for short. In the embodiments of the present application, the process of editing the target content may be called secondary creation. In the present application, the picture information (such as video frames) and the audio information (such as speech content) contained in the target content captured from the target video can be edited, for example by adding stickers or filters to the video frames, inserting or replacing text, or replacing or re-dubbing the audio information.
Key video segment: a segment corresponding to certain key content in the target video, such as a famous-scene segment, a segment featuring certain video objects, a segment with high discussion heat, or a classic segment. A "famous scene" refers to a highlight, high-energy segment in a show. In the present application, key video segments facilitate discussion interactions between objects.
Social dynamic content: content that, once published, can be browsed and interacted with (liked and commented on) by its publisher and other objects. In the embodiments of the present application, social dynamic content is produced from target content: the target object can capture the currently playing target video while watching it, and, after secondary creation of the captured target content, publish it as social dynamic content for interaction.
Additional multimedia resource: additional content associated with a video discussion area and a discussion group, presented in multimedia form, such as video or audio. The additional content is related to the discussion group of the corresponding discussion area, such as bonus clips about a CP, and needs to be unlocked after the number of posts in the discussion area reaches a certain value.
Portrait segmentation: a process or method of recognizing the outline of a person in an image and separating the person from the background, returning, for example, a grayscale mask or a foreground portrait image.
Dynamic sticker: sticker capability with rich special effects, generally realized on the basis of technologies such as face recognition, key-point detection, and expression detection.
Image warping: translation, rotation, scaling, affine transformation, perspective transformation, cylindrical transformation, and similar effects applied to a two-dimensional (2D) image.
Liquify transformation: a special-effect capability in picture editing that supports pushing, pulling, rotating, squeezing, or expanding image areas. Face-aware liquify is derived from the liquify transformation and is mainly used to recognize and edit facial features such as the eyes, nose, and mouth (a liquify-style sketch follows).
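Liquify-style warps are commonly implemented as a locally weighted inverse remap. The sketch below shows one such "push" warp using OpenCV; it illustrates the general technique rather than the patent's own implementation.

```python
import cv2
import numpy as np

def liquify_push(img: np.ndarray, cx: int, cy: int,
                 radius: int, dx: float, dy: float) -> np.ndarray:
    """Push the pixels inside a circle of the given radius around (cx, cy)
    by (dx, dy), with a smooth falloff toward the circle's edge."""
    h, w = img.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float32)
    dist2 = (xs - cx) ** 2 + (ys - cy) ** 2
    # falloff: full displacement at the centre, zero at the radius
    falloff = np.clip(1.0 - dist2 / float(radius * radius), 0.0, 1.0)
    map_x = xs - dx * falloff        # inverse mapping: sample "behind" the push
    map_y = ys - dy * falloff
    return cv2.remap(img, map_x, map_y, cv2.INTER_LINEAR)
```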
The following briefly describes the design concept of the embodiment of the present application:
As socializing around video becomes a new requirement for video products combined with social networks, the combination of video and discussion has become an important means of improving the object experience.
With the development of video technology, people can watch all kinds of long-form videos such as movies and TV series through video websites, and share their views in a video discussion area or on third-party social platforms for discussion.
Existing long-video platforms support capturing video frames while an object watches a video. Everyday videos (such as movies, episodes, short videos, and variety shows) often contain classic segments, famous scenes, or popular segments such as memorable dialogues. People can capture these segments or scene images, add a textual description to the screenshot, and publish it to a video discussion area (see the schematic diagrams in FIG. 1A and FIG. 1B) or a third-party social platform, on the basis of which others can browse, discuss, like, comment, and so on.
However, the social features of existing platforms are oriented to all objects, so their functional design is generic and cannot satisfy the actual needs of object groups with specific interaction requirements. For example, an object who wants to ship a CP cannot obtain a good CP-shipping experience while watching a video on existing long-video platforms, and can only ship CPs on other platforms after finishing the video.
In view of this, the embodiments of the present application provide a video interaction method, a video interaction apparatus, an electronic device, and a storage medium. In the present application, target content captured in the video playing interface of a client can undergo secondary creation using a specified editing function; for example, the picture information and/or audio information contained in the target content can be processed with a specific effect. This secondary creation happens directly while the target object is watching the video, without a third-party creation platform, so the operation is simple and convenient. While watching a video, if the target object wants to interact around the video objects in it, target content can be captured on demand and edited based on a specified editing function. The editing is targeted: it mainly adjusts the interaction state of the at least two video objects to be edited in the target content so that it conforms to an expected interaction scene, which facilitates subsequent interaction between objects around the edited video objects and the expected interaction scene. On this basis, after the target content has been secondarily created, the target content carrying an editing effect can be shared, which enriches the interactive content between objects, improves content quality, and optimizes the combination of video and object interaction.
In addition, the present application further proposes that, after secondary creation, the target content carrying an editing effect can be published to a video discussion area in the form of social dynamic content. In this process, for ease of discussion, the video-object discussion group to which the target content is to be bound must be selected before publication, and the target content is then published directly to the discussion area of the designated discussion group. Since the social dynamic content in that discussion area is closely related to the designated discussion group, discussion interaction between target objects is facilitated, and video interactivity is enriched while the combination of video and object interaction is optimized.
The preferred embodiments of the present application will be described below with reference to the accompanying drawings of the specification, it being understood that the preferred embodiments described herein are for illustration and explanation only, and not for limitation of the present application, and embodiments of the present application and features of the embodiments may be combined with each other without conflict.
Fig. 2 is a schematic diagram of an application scenario according to an embodiment of the present application. The application scenario diagram includes two terminal devices 210 and a server 220.
In this embodiment of the present application, the terminal device 210 includes, but is not limited to, a mobile phone, a tablet computer, a notebook computer, a desktop computer, an e-book reader, an intelligent voice interaction device, a smart home appliance, an in-vehicle terminal, and the like. The terminal device may be provided with a client related to video interaction, where the client may be software (such as a browser or video software), or a web page, an applet, or the like; the server 220 may be the server corresponding to that software, web page, or applet, or a server dedicated to video interaction, which is not specifically limited in this application. The server 220 may be an independent physical server, a server cluster or distributed system composed of multiple physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, a content delivery network (Content Delivery Network, CDN), big data, and artificial intelligence platforms.
It should be noted that the video interaction method in the embodiments of the present application may be performed by an electronic device, which may be the terminal device 210 or the server 220; that is, the method may be performed by the terminal device 210 or the server 220 alone, or by the terminal device 210 and the server 220 together. For example, when performed together, a client related to video interaction, such as video software, may be installed on the terminal device 210, and the target object can watch the target video through the video software. During viewing, a capture operation may be triggered, and the client responds to it by displaying the captured target content in the video playing interface. The target object may then secondarily create the target content directly in the video software: the client responds to the editing operation on the target content by sending a corresponding editing request to the server 220; after receiving the request, the server 220 may return the corresponding editing functions to the client as a list or in another form, where each editing function targets pictures or audio related to video objects in the target video. The client then displays these editing functions, and, according to the specified editing function selected by the target object, the server 220 can acquire and store the corresponding editing effect after editing at least one of the picture information and the audio information relating to the at least two video objects to be edited in the target content. Afterwards, the target object can share the secondarily created target content, for example by publishing it to a discussion area on the platform or sharing it to a third-party social platform.
Taking publishing to a video discussion area as an example, the target content carrying the editing effect can be published to the discussion area in the interactive interface of the target video in the form of social dynamic content (such as a post). Specifically, before publication, a discussion-group selection request can be sent to the server 220 through the client; then, based on the information about each discussion group returned by the server 220, after the discussion group to which the target content is to be bound (the designated discussion group) is selected, a request for publishing the target content to the designated discussion group is sent to the server 220. The server 220 then returns the discussion-area information associated with the designated discussion group to the client, and the client publishes the target content carrying the editing effect, in the form of social dynamic content, to the interactive interface of the target video and to the discussion area corresponding to the selected designated discussion group.
Taking sharing to a third-party social platform as an example, the target content carrying the editing effect can be shared to the third-party social platform in the form of social dynamic content, for example to a friends circle or a conversation, so as to attract other objects to view the editing effect of the target content and guide them to watch the feature film of the target video.
It will be appreciated that in the specific embodiments of the present application, related data such as user operations are involved, and when the above embodiments of the present application are applied to specific products or technologies, user permissions or consents need to be obtained, and the collection, use and processing of related data need to comply with related laws and regulations and standards of related countries and regions.
In an alternative embodiment, the terminal device 210 and the server 220 may communicate via a communication network.
In an alternative embodiment, the communication network is a wired network or a wireless network.
It should be noted that, the number of terminal devices and servers shown in fig. 2 is merely illustrative, and the number of terminal devices and servers is not limited in practice, and is not particularly limited in the embodiment of the present application.
In the embodiment of the application, when the number of the servers is multiple, the multiple servers can be formed into a blockchain, and the servers are nodes on the blockchain; according to the video interaction method disclosed by the embodiment of the application, related video data can be stored on a blockchain, such as editing function related information, discussion group information of videos, discussion area information, detail information of key video clips, interaction data of social dynamic content and the like.
In addition, the embodiment of the application can be applied to various scenes, including not only video interaction scenes, but also scenes such as cloud technology, artificial intelligence, intelligent traffic, auxiliary driving and the like.
The video interaction method provided by the exemplary embodiments of the present application will be described below with reference to the accompanying drawings in conjunction with the above-described application scenario, and it should be noted that the above-described application scenario is only shown for the convenience of understanding the spirit and principles of the present application, and embodiments of the present application are not limited in this respect.
Referring to fig. 3, a flowchart of an implementation of a video interaction method in an embodiment of the present application is shown, taking a client installed on a terminal device as an execution body as an example, and the specific implementation flow of the method is as follows:
S31: and the client responds to the intercepting operation of the target video in the video playing interface, and displays the intercepted target content in the video playing interface.
Wherein the intercepted target content may be a picture or a video clip. The target object can intercept a picture or intercept a video, and then the intercepted picture or video fragment is authored secondarily.
In the present application, the target object refers to an object using the client, such as a user watching a target video.
FIG. 4 is a schematic diagram of a video playing interface according to an embodiment of the application. A screenshot button is provided in the video playing interface, as shown at S41; the target object can click this button to capture the current frame and obtain a captured picture.
A clip button is also provided in the video playing interface, as shown at S42; the target object can click this button to start capturing, click it again after a period of time to stop, and obtain a video clip.
It should be noted that the above takes capturing one picture or one video clip as an example; when the target object wants to capture multiple pictures or video clips, the capture operation can be triggered multiple times through the buttons shown above. Besides clicking the related buttons, other ways of triggering the capture operation, such as gestures, voice, or double-clicking the related buttons, are also applicable to the present application and are not specifically limited here (a sketch of the capture handling follows).
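A minimal sketch of how a client might model the two capture entries of FIG. 4 (S41/S42); the structure and names are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class CaptureController:
    clip_start: float | None = None

    def on_screenshot(self, position_s: float) -> dict:
        """S41: a single tap captures the current frame."""
        return {"type": "picture", "at": position_s}

    def on_clip_button(self, position_s: float) -> dict | None:
        """S42: the first tap starts a clip, the second tap finishes it."""
        if self.clip_start is None:
            self.clip_start = position_s       # first tap: start capturing
            return None
        start, self.clip_start = self.clip_start, None
        return {"type": "clip", "start": start, "end": position_s}
```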
S32: the client side responds to the editing operation of the target content and presents the editing effect of the target content.
The editing effect is obtained by performing editing processing of corresponding effects on at least one of picture information and audio information related to at least two video objects to be edited in the target content according to a specified editing function, and the editing processing is processing of interaction states of the at least two video objects to be edited, so that the interaction states of the at least two video objects to be edited after editing conform to expected interaction scenes.
In the present application, when the target content is edited based on the specified editing function, it must be ensured that the target content contains multiple video objects. The at least two video objects to be edited may be preconfigured by the system (for example, video objects bound to the target content in advance), determined by recognizing the target content when an editing operation on it is detected, designated by the target object, or obtained in other ways, which is not specifically limited here.
In addition, a video object in the present application may be a person, an animal, a plant, or another object, which is not specifically limited herein. Taking two video objects to be edited as an example, the pair may be two persons, a person and an animal, a person and a plant, two animals, an animal and a plant, and the like.
The interaction state between the at least two video objects to be edited represents the relationship between the video objects to be edited in the target content, which may specifically be a behavioral relationship (such as intimate contact, leaning on each other, gazing, pointing, etc.), a linguistic relationship (such as quarreling with each other, talking and laughing, etc.), and the like. The relationship may be determined according to the actions, expressions, related audio, and the like of at least one video object to be edited in the target content, or according to the word sense, atmosphere, and the like of the background music, narration, and so on in the target content.
Taking the case where the two video objects to be edited are both characters as an example, the interaction state may specifically refer to the social state of the two characters, such as whether they have intimate contact (e.g., a handshake, a hug, etc.); taking the case where the two video objects to be edited are a person and an object as an example, the interaction state may specifically refer to the interaction state between the person and the object, such as whether the person touches the object, mentions the object, thinks about the object, etc., which is not specifically limited herein.
In the embodiment of the application, the expected interaction scene can be flexibly set according to requirements. For example, different specified editing functions may be bound to different expected interaction scenes: the expected interaction scene bound to the press-head function listed below is kissing, the expected interaction scene bound to the hug function is hugging, and so on.
It should be noted that the expected interaction scenarios listed above are close-contact related scenarios. Of course, the expected interaction scene is not limited thereto, and may also be an action scene, such as a real fight, playful roughhousing, teaching a certain sport, running together, walking together, etc.; or a suspense scene, such as one party startling the other, or a person pointing at objects related to an event (such as a detective pointing at key evidence), etc.; or a campus scene, such as talking and laughing, discussing questions, etc.; or even a work scene, such as slacking off (for persons and objects), working attentively (for persons and objects), and so forth.
Taking two video objects to be edited that are both persons as an example, if the expected interaction scene is a fight and the two persons in the current target content are merely standing face to face, the orientation of one person can be adjusted so that he appears to punch or kick the other. As another example, if the expected interaction scene is play and the person in the current target content is not in contact with the pet, the picture information related to the person and the pet can be adjusted so that a corresponding interaction occurs between them, such as adjusting the person to lift the pet, etc.
In addition, it should be noted that, if the interaction state of the video objects to be edited in the current target content already conforms to or is very close to the expected interaction scene, the interaction state between the video objects to be edited can be enhanced in the above manner, specifically by extending the interaction duration, increasing the amplitude of the interaction action, and the like, which is not specifically limited herein.
Each editing function in the present application is an editing function for a picture or audio contained in a target video; the above-described specified editing function may be selected by the target object from a plurality of editing functions, or may be a default editing function.
In the embodiment of the application, only the audio information of the target content can be edited, only the picture information of the target content can be edited, and both the audio information and the picture information of the target content can be edited.
S33: The client responds to a sharing operation on the target content and shares the target content with the editing effect.
In the embodiment of the application, the target content may be posted directly to the video discussion area of the target video, or shared to a third-party social platform. These two cases are described below:
In an alternative implementation manner, the client responds to discussion-group selection and sharing operations on the target content, and publishes the target content with the editing effect, in the form of social dynamic content, into the interactive interface of the target video, specifically into the discussion area corresponding to the selected specified discussion group.
Wherein each discussion group contains at least two video objects associated with the target video. The at least two video objects may be the video objects to be edited, or may be specified by the target object, or may be recommended by the system, or the like.
In the present application, a video object related to the target video may be a character appearing in the target video, such as a person, an avatar, etc.; it may also be a creator of the target video, such as the director, the photographer, etc. Taking characters in a video as an example, when watching the video, viewers like to pair two or more characters they love, such as an on-screen couple, partners, or other related characters; such a pairing may be referred to as a CP (one form of discussion group).
As another example, a video object related to the target video may also be an animal, a plant, or another object in the video. When watching a video, the viewer may also be interested in one animal and one plant in the video, or in two animals, or in one animal and one flower, etc.; in these cases, the video objects of interest may be paired to form another form of discussion group.
The social dynamic content refers to a shared post whose main content is the target content, and may also be called a post; when posting, some text description and the like may be added as auxiliary content to the target content.
In the embodiment of the application, after content is intercepted from the target video, simple and quick secondary creation can be performed on the intercepted target content; after the secondary creation is completed, the post can be published to the discussion area, carrying the topic label (tag) of the discussion group and a text description.
Considering that, in related technical solutions, the content of discussion areas is plentiful, covers various topics, and lacks classification or filtering functions, an object cannot quickly and accurately find content of interest or socialize with other objects having similar interests. In the present application, the discussion area is therefore divided by discussion group: each CP has an exclusive forum, each group of a person and an object has an exclusive forum, each group of animals has an exclusive forum, each group of a person and a plant has an exclusive forum, each group of an animal and an object has an exclusive forum, and the like, which are not listed here one by one.
Optionally, when the interactive interface includes the labels corresponding to the discussion groups, the target object may also switch, through the labels, the discussion group and discussion area to be viewed. Specifically, the client responds to a label selection operation triggered on the interactive interface and switches to the discussion area of the discussion group corresponding to the selected target label.
Fig. 5A is a schematic diagram of an interactive interface according to an embodiment of the application.
In fig. 5A, 4 labels are displayed in the S51 area, each corresponding to a discussion group: xx1, xx2, xx3 and xx4. Limited by screen space, only these 4 labels are displayed in the current interactive interface; the target object can view more labels, not currently displayed, by sliding left and right.
As shown on the left side of fig. 5A, the currently displayed discussion area corresponds to the discussion group "xx1". Two posts are displayed, published by "ice lolly" and "long-lasting" respectively; the post published by "ice lolly" is displayed completely, and the associated discussion group "xx1" is further displayed at its lower left, at S521.
When the target object selects "xx2" as the target label, the interactive interface shown on the left side of fig. 5A is switched to the interactive interface shown on the right side.
As shown on the right side of fig. 5A, the currently displayed discussion area corresponds to the discussion group "xx2". Below the left of the completely displayed post published by "deep night", at S522, the discussion group "xx2" associated with the post is further displayed.
It should be noted that the interactive interface in the present application refers to an interface through which the target object can interact with other objects while watching the video, such as a discussion interface. The interactive interface can be displayed in the form of a sub-interface (such as a popup window, a floating layer, a picture-in-picture, etc.), or can be an independent interface.
In this embodiment, by adding the secondary-creation function, after intercepting content from the target video, the target object can quickly and simply perform secondary creation on the interaction state of the video objects to be edited, thereby satisfying the target object's desire to create, giving free rein to the target object's imagination, enriching the content of posts in the discussion area, and improving content quality.
Moreover, the content of the discussion area is divided into topics by different discussion groups, which can help target objects with interaction requirements for different video objects quickly find circles sharing their interests and browse the content they are most interested in.
Optionally, considering that, when a plurality of target objects perform secondary creation on the same video picture based on the same specified editing function, approximately identical target content may be obtained, the related social dynamic content can in this case be clustered for display, in order to better present each target object's viewpoint expression on similar scenes while saving server resources and improving overall efficiency.
For example, in the discussion area, the target content and other content associated with the target content are displayed in clusters, and viewpoint information corresponding to each content is sequentially presented in the viewpoint area.
The other content is other social dynamic content whose similarity with the target content exceeds a preset threshold; specifically, the similarity of the contained video pictures may exceed the preset threshold. The display order of the viewpoint information corresponding to each content is associated with the display order of the target content and the other content. The viewpoint information of each content may be the text description published along with the social dynamic content, comment information added after publication, or the like, which is not specifically limited herein.
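A minimal sketch of this clustering idea follows. The perceptual-hash field, the Hamming-distance similarity, and the 0.9 threshold are all illustrative assumptions; the application only requires some similarity score compared against a preset threshold.

```typescript
// Sketch: group discussion-area posts whose picture similarity exceeds a threshold.
interface Post {
  id: string;
  framePHash: bigint;   // 64-bit perceptual hash of the secondary-creation picture (assumed)
  publishedAt: number;  // epoch milliseconds
  viewpoint: string;    // viewpoint information (text description / comment)
}

// Similarity from Hamming distance of two 64-bit hashes: 1.0 = identical frames.
function similarity(a: bigint, b: bigint): number {
  let diff = a ^ b;
  let distance = 0;
  while (diff > 0n) { distance += Number(diff & 1n); diff >>= 1n; }
  return 1 - distance / 64;
}

function clusterPosts(posts: Post[], threshold = 0.9): Post[][] {
  const clusters: Post[][] = [];
  for (const post of posts) {
    const home = clusters.find(c =>
      similarity(c[0].framePHash, post.framePHash) > threshold);
    if (home) home.push(post); else clusters.push([post]);
  }
  // Within a cluster, the most recently published post is shown at the front (cf. S531).
  clusters.forEach(c => c.sort((x, y) => y.publishedAt - x.publishedAt));
  return clusters;
}
```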
Referring to fig. 5B, a schematic diagram of discussion-area post clustering in an embodiment of the present application is shown. In general, the posts published by each object may be arranged according to a certain rule, for example by posting time. Here, in addition, secondary-creation pictures with relatively high similarity are aggregated together and presented in the area shown at S53, where the secondary-creation picture contained in the post most recently published by the target object may be displayed at the forefront and highlighted, as at S531. Other similar content can be viewed by sliding laterally; for example, S532 is an example of a laterally sliding slide bar in the embodiment of the present application (the slide bar may also be shown and hidden).
If one target object publishes multiple secondary-creation pictures, post clustering can be performed according to one of them, and the others are folded and displayed under the secondary-creation picture participating in the clustering; as at S531, the other secondary-creation pictures under it can be viewed by further clicking.
Further, the viewpoint information corresponding to each content may be displayed in the viewpoint area S54 in the display order of the contents, where the viewpoint information corresponding to S531 may be displayed at the forefront of the viewpoint area, as shown at S541. Further viewpoint expressions can be viewed by sliding longitudinally; for example, S542 is an example of a longitudinally sliding slide bar in the embodiment of the present application (the slide bar may be shown or hidden).
Accordingly, when the target object views another secondary-creation picture by sliding laterally, the viewpoint information corresponding to that content is switched to the forefront of the viewpoint area.
Based on this implementation, each user's viewpoint expression on similar scenes can be well displayed while server resources are saved and overall efficiency is improved; in addition, by clustering the posts in the discussion area, an object can conveniently and rapidly find content of interest, and can more conveniently identify other objects with similar interests to socialize with.
In another optional implementation manner, the client side responds to the sharing operation of the target content to share the target content with the editing effect to the third-party social platform.
The third-party social platform is a platform, different from the client currently playing the target video, that supports social interaction between objects. In the application, when the object shares the target content after secondary creation, it can be published to the video discussion area of the client playing the target video, or shared to a third-party social platform, so as to attract other objects to watch the secondary editing effect and to drive traffic to the feature film of the target video.
Fig. 5C is a schematic diagram of a third-party social platform according to an embodiment of the present application. The interface illustrated in fig. 5C is chat software; the target object may share the edited target content to a third-party social platform, such as a circle of friends or a chat session, where friends can browse and discuss with each other.
In this case, the client may further jump to the video playing interface in response to a triggering operation of viewing, through the third-party social platform, the target video corresponding to the target content; when the target content continues to play in the video playing interface, the corresponding editing effect is restored to the original video picture.
Fig. 5D is a schematic diagram of a video playing process according to an embodiment of the application. If the target object clicks the first picture in the circle of friends listed in fig. 5C, the chat software can jump to the video platform to continue watching the target video corresponding to the picture, presenting the interface shown in fig. 5D. In this process, the secondary-creation effect of the current frame can be restored to the display effect of the original video, for example by gradually restoring the secondary-creation picture in fig. 5D to the original video picture, so as to connect seamlessly to playback of the feature film.
Based on this implementation, the target object's secondary-creation result for the target content in the target video can be effectively combined with an external platform, which can both drive traffic and further enrich the interactivity between objects.
In addition, considering that, in related technical solutions, segments of interest (such as CP moments worth "cracking") cannot be located quickly and accurately and the target object must spend time and energy searching for them, the application further provides the idea of guiding through the playing progress bar. The specific functions are as follows:
An alternative implementation is that the client responds to the object interaction operation for the target video and highlights at least one key video snippet in the playing progress bar of the target video.
The highlighting may take many forms, for example a special pattern mark, a special color mark, or a special text mark; the corresponding portion of the playing progress bar may also be highlighted, enlarged, or bolded, which is not limited herein.
The key video snippets in the application are snippets conducive to enhancing discussion and interaction between objects, such as a certain CP scene, a snippet with high discussion heat, a snippet related to a certain event, several candidate snippets related to expected interaction scenes, and the like.
The key video snippets may be preconfigured by the system; for example, some snippets with high discussion heat are marked as key video snippets in advance, or some candidate pictures, snippets and the like related to the expected interaction scene are marked as key video snippets. They may also be set according to the objects watching the video. For example, a target object intercepts a target segment for editing while watching the video, and the target segment can be recorded to the system side; or the target object intercepts one or more pictures, and one or more video segments can be obtained according to the respective playing time points of the pictures and recorded to the system side (for example, taking the video segment within a period before and after one picture, or taking one of several pictures as the start frame and another as the end frame of a video segment, and so on). Video segments obtained in this way from the related content intercepted by objects can serve as key video snippets and are also highlighted when other objects later watch the target video. The key video snippets may also be set in other ways by the objects watching the video, which is not specifically limited herein.
Specifically, for a video segment recorded to the system side according to an object's operation: if no key video snippet related to the target segment currently exists, the segment can be used as a new key video snippet; if a related record exists and is identical, the duplicate record is removed on the system side; if a related record exists and is partially the same, the segment may be combined with the related video segment to obtain a new key video snippet, and so on, which is not specifically limited herein.
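A sketch of this maintenance rule follows, under assumed data shapes (segments as start/end play times in seconds): a new segment is added as-is when unrelated, dropped when identical to an existing record, and merged with partially overlapping records into one new key snippet.

```typescript
// Sketch: maintain system-side key video snippets from viewer-recorded segments.
interface Segment { start: number; end: number } // play time in seconds

function recordSegment(existing: Segment[], s: Segment): Segment[] {
  const overlaps = (a: Segment, b: Segment) => a.start <= b.end && b.start <= a.end;

  // Identical record already exists: deduplicate, keep the list unchanged.
  if (existing.some(e => e.start === s.start && e.end === s.end)) return existing;

  const related = existing.filter(e => overlaps(e, s));
  if (related.length === 0) return [...existing, s]; // brand-new key snippet

  // Partially overlapping records: merge them with the new segment into one snippet.
  const rest = existing.filter(e => !overlaps(e, s));
  const merged: Segment = {
    start: Math.min(s.start, ...related.map(e => e.start)),
    end: Math.max(s.end, ...related.map(e => e.end)),
  };
  return [...rest, merged];
}
```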
It should be noted that any picture in a key video snippet can be adjusted using the specified editing function in the application, so as to obtain an editing effect that conforms to the expected interaction scene.
If, before step S31, the target object has viewed one or more key video snippets in the target video in the above manner, then, when step S31 is executed, the target content can be intercepted based on a key video snippet, and steps S32 and S33 can then be executed; for any picture in the key video snippet, the corresponding editing processing is performed according to the specified editing function.
For example, when the intercepted target content is a certain scene segment, the interaction state between the CP in the segment can be adjusted, based on the specified editing function in the application, to conform to the expected interaction scene, so as to realize a certain close-contact effect, a certain talking-and-laughing effect, and the like.
For another example, when the intercepted target content is a segment related to an event, the interaction state among the several objects involved in the event (such as persons, animals, plants, or other objects) can be adjusted, based on the specified editing function in the application, to conform to the expected interaction scene, so as to realize a certain suspense effect, a certain pointing effect, and the like.
In addition, each key video snippet in the embodiment of the application is also associated with at least one discussion group, and each discussion group has at least two video objects. On this basis, when key video snippets are highlighted, personalized highlighting may also be performed according to the preferences of the target object that triggers the object interaction operation.
For example, if analysis of object A's interaction data on various platforms shows that object A prefers discussion group 1 and discussion group 2 related to the target video, then, when object A triggers the object interaction operation for the target video, only the key video snippets related to discussion group 1 and discussion group 2 may be highlighted; alternatively, every key video snippet contained in the target video is highlighted (for example, with a yellow mark), and on this basis the key video snippets related to discussion group 1 and discussion group 2 are further emphasized (for example, with another mark 1 distinguished from yellow). As another example, if target object B prefers discussion group 3 related to the target video, then, when target object B triggers the object interaction operation for the target video, only the key video snippets related to discussion group 3 may be highlighted; alternatively, every key video snippet contained in the target video is highlighted (for example, with a yellow mark), and the key video snippets related to discussion group 3 are further emphasized on this basis (for example, with another mark 2 distinguished from yellow). The marks 1 and 2 may be the same or different. A minimal sketch of this preference-based highlighting is given below.
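The data shapes and style names below are illustrative assumptions; the point is only the per-snippet decision between a base mark, an emphasized mark, or no mark.

```typescript
// Sketch: choose a highlight style per key snippet from the viewer's preferred groups.
interface KeySnippet { start: number; end: number; groupIds: string[] }

type HighlightStyle = 'base' | 'emphasized' | 'none';

function styleFor(snippet: KeySnippet,
                  preferredGroups: Set<string>,
                  showOnlyPreferred: boolean): HighlightStyle {
  const preferred = snippet.groupIds.some(g => preferredGroups.has(g));
  if (preferred) return 'emphasized';          // e.g. extra mark on top of yellow
  return showOnlyPreferred ? 'none' : 'base';  // e.g. plain yellow mark, or hidden
}
```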
It should be noted that the above-listed ways of capturing and highlighting the key video snippets are merely illustrative, and are not limited in this disclosure.
In the application, the target object can trigger the object interaction operation by means of a specified gesture, voice, a control, and the like, so as to open up discussion around the discussion groups consisting of video objects.
The following is mainly exemplified by a control triggering manner:
Specifically, an object interaction control can be added to the video playing interface. For example, in a CP-shipping scene the control can be a 'crack CP' button that can be turned on and off at any time; in a detective scene, a 'view clues' button that can be turned on and off at any time; in a work or study scene, a 'know concentration state' button; and in a sports event scene, a 'view highlights' button, and so on.
Taking the 'crack CP' button as an example, when the button is turned on, key video snippets of named-scene types can be highlighted in the playing progress bar. Taking the 'view clues' button as an example, when the button is turned on, key video snippets related to clues such as the committing of the case and the pointing out of evidence can be highlighted in the playing progress bar. Taking the 'know concentration state' button as an example, when the button is turned on, key video snippets related to the events currently handled by the video object, such as office segments, work meeting segments, classroom segments, etc., can be highlighted in the playing progress bar. Taking the 'view highlights' button as an example, when the button is turned on, some highlight segments, hot-discussion segments, disputed-point segments, etc. may be recommended.
It should be noted that the above description is merely a simple example of several different scenarios, and other scenarios are equally applicable, and are not listed here.
Taking the named scene as an example, the key video snippet may also be called a named scene snippet, and the corresponding object interaction control may be a 'crack CP' button, as shown at S43 in fig. 4: a switch button with the 'crack CP' function, located under the playing progress bar on the right side of the bullet-screen input box.
After the target object enters the video playing interface, the button shown at S43 can be clicked to trigger the object interaction operation; in response, the client highlights, in the playing progress bar of the currently played target video, some named scene snippets corresponding to it.
In the following, a highlighting mode using a special color mark is taken as an example; for instance, color-change marks of the named scene snippets are displayed on the playing progress bar.
Referring to fig. 6, a schematic illustration of highlighting named scene snippets according to an embodiment of the application is shown. Take the target video as episode 1 of the xx television drama, in which 5 named scenes are preconfigured; the playing progress bar shown in fig. 6 can then be displayed, where the portions of the progress bar corresponding to the named scene snippets are highlighted with white marks, labeled S601-S605 in fig. 6.
In an alternative embodiment, when the preset prompting condition is met, corresponding first prompting information can be presented in the video playing interface.
The first prompt information is used to guide the target object to execute the object interaction operation so as to view the key video snippets related to each discussion group in the target video. To facilitate guiding the target object, the corresponding first prompt information is presented at a position related to the object interaction control, which may be above or below the control.
For example, the preset prompting condition may be: detecting that the target object has not triggered the object interaction operation within a certain period of watching the target video. That is, if the target object has not clicked the object interaction control to trigger the object interaction operation within a certain period of watching the target video, the first prompt information can be displayed at the position related to the object interaction control.
In the application, if the 'crack CP' function has not been turned on within a first set duration T1 after the target object enters the video playing interface, the target object can be guided with the first prompt information; the first prompt information may take the form of text, a picture, an animation, a combination of graphics and text, etc., which is not specifically limited herein.
Specifically, the first prompt information may cease to be displayed after being shown for a duration T2; alternatively, the target object can click any position in the video playing interface to cancel the display of the first prompt information.
In addition, if the target object still has not turned on the 'crack CP' function after the first prompt information is no longer displayed, the first prompt information may be presented again in the video playing interface after a second set duration T3, and this prompt may again cease to be displayed after being shown for a duration T4.
The above durations can be flexibly set according to requirements and experience and are not specifically limited herein. For example, T2 may be set to 10 seconds and T4 to 5 seconds; that is, the first prompt is displayed for a longer time, and each subsequent prompt for a shorter time. In some examples, T1 may be 0 seconds; that is, the corresponding first prompt information is presented in the video playing interface as soon as the target object clicks into the target video playing interface.
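A minimal sketch of this prompt timing follows, with T1-T4 treated as configurable millisecond durations and the show/hide callbacks assumed to be supplied by the playing interface.

```typescript
// Sketch: show the first prompt after T1 of inactivity, hide it after T2,
// then re-prompt every T3, each time visible for T4, until the feature is on.
function schedulePrompt(show: () => void,
                        hide: () => void,
                        featureOn: () => boolean, // has 'crack CP' been turned on?
                        T1: number, T2: number, T3: number, T4: number): void {
  const cycle = (delay: number, visibleFor: number) => {
    setTimeout(() => {
      if (featureOn()) return;   // stop prompting once the function is enabled
      show();
      setTimeout(() => {
        hide();
        cycle(T3, T4);           // re-prompt after T3, visible for T4
      }, visibleFor);
    }, delay);
  };
  cycle(T1, T2);                 // first prompt: after T1, visible for T2
}
```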
It should be noted that the prompting conditions listed above are only simple examples. For example, whether the target object belongs to a crowd with a specific interaction requirement may also be analyzed according to the target object's profile information or historical behavior before watching the target video, so as to determine whether to present the corresponding first prompt information, and so on.
For example, if the target object first watches a CP-oriented short video and then clicks through from the short video to the feature film, it can be understood that the target object likes CP-oriented content and belongs to the crowd that likes cracking CPs, and the corresponding first prompt information is then presented, and so on.
Fig. 7 is a schematic diagram of the first prompt information in an embodiment of the application. The first prompt information in fig. 7 takes the form of a bubble popup: a bubble S70 appears above the S43 button, displaying text such as 'Turn on the crack-CP mode and crack CPs together with everyone', prompting the target object to turn on the 'crack CP' function. The target object may also close the bubble prompt.
It should be noted that, the above-listed presentation manners of the first prompt information are only illustrative, and any one of the presentation manners is applicable to the embodiments of the present application, and will not be described in detail herein.
In the embodiment of the application, besides marking the named scenes of the target video on the playing progress bar, bubbles can further be arranged to guide the target object to open the discussion area for browsing and interaction.
In an optional implementation manner, a second prompt message is presented at a relevant position of the target key video snippet in the playing progress bar, where the second prompt message is used for guiding the target object to view a discussion area corresponding to the target key video snippet.
The target key video snippet is selected from at least one key video snippet based on the current playing time point of the target video and the playing time of the at least one key video snippet.
For example, the key video snippet to be played that is nearest to the current playing time point of the target video is taken as the target key video snippet, or the key video snippet containing the current playing time point of the target video (i.e., the one being played) is taken as the target key video snippet.
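A minimal sketch of this selection rule, under assumed data shapes: prefer the snippet currently being played; otherwise take the nearest upcoming one.

```typescript
// Sketch: pick the target key snippet from the current playing time point.
interface Snippet { start: number; end: number } // play time in seconds

function targetSnippet(snippets: Snippet[], now: number): Snippet | undefined {
  const playing = snippets.find(s => s.start <= now && now <= s.end);
  if (playing) return playing;                     // snippet containing "now"
  return snippets
    .filter(s => s.start > now)                    // only upcoming snippets
    .sort((a, b) => a.start - b.start)[0];         // nearest one to be played
}
```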
It should be noted that, in the embodiment of the present application, the second prompt information is presented in a similar manner to the first prompt information, and the relevant position may be above or below the color-changing identifier of the target key video segment. And, the second prompting information may be in the form of text, picture, animation or graphic combination, which is not limited herein.
In addition, the second prompt information may cease to be displayed after being shown for a duration T5; alternatively, the target object can click any position in the video playing interface to cancel its display. The second prompt information may also be presented again at the related position of the target key video snippet after a third set duration T6, and may again cease to be displayed after being shown for a duration T7, and so on.
Referring to fig. 8, a schematic diagram of the second prompt information in the form of a bubble popup in an embodiment of the present application is shown. After the target object triggers the object interaction operation and the 'crack CP' mode is turned on, the color-change marks of the named scene snippets appear on the playing progress bar, and a bubble appears at the color-change mark of the named scene snippet nearest after the current playing time point. In fig. 8, a bubble S80 appears above the second named scene on the playing progress bar, displaying the text 'xxCP princess-hug named scene', and a picture from the named scene snippet (such as a screenshot of the princess hug) is further displayed on the left side of the text. The target object may also close the bubble prompt.
Based on the implementation mode, the object is guided to participate in interaction in the discussion area through different prompting modes, so that the interaction method is more efficient and convenient.
Meanwhile, a jump control can be displayed in the second prompt information, and the target object is guided to open the related discussion area through the jump control.
An alternative embodiment is: and the client responds to the triggering operation of the jump control, presents an interactive interface of the target video, and displays a discussion area corresponding to the target key video fragment in the interactive interface.
Wherein, the discussion area corresponding to the target key video snippet is used for: and displaying the social dynamic content corresponding to the target discussion group associated with the target key video snippet.
Because each key video snippet in the application is associated with at least one discussion group: when the target key video snippet is associated with only one discussion group, that discussion group is taken as the target discussion group; when the target key video snippet is associated with a plurality of discussion groups, one of them can be selected as the target discussion group based on the interaction quantity of the social dynamic content corresponding to each discussion group.
Wherein the interaction quantity of the social dynamic content may be determined by the number of likes, comments, and the like.
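A sketch of this selection, with likes plus comments as the assumed interaction measure and a non-empty candidate list assumed:

```typescript
// Sketch: among several associated discussion groups, pick the one whose
// social dynamic content has the highest interaction quantity.
interface GroupStats { groupId: string; likes: number; comments: number }

function targetGroup(candidates: GroupStats[]): string {
  // Assumes candidates is non-empty (the snippet is associated with >= 1 group).
  return candidates.reduce((best, g) =>
    (g.likes + g.comments) > (best.likes + best.comments) ? g : best
  ).groupId;
}
```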
Referring to fig. 9, a schematic diagram of a jump control according to an embodiment of the application is shown. In fig. 9, the S91 portion illustrates an exemplary jump control in the form of a 'crack' button, which the target object can click to open the discussion area.
In addition, a discussion control can also be highlighted in the video playing interface; through this control, the target object is guided to open the related discussion area.
An alternative embodiment is: the discussion control for jumping to the interactive interface is highlighted in the video playing interface; the target object can trigger it by clicking (or double-clicking, long-pressing, etc.), and the client, in response to the triggering operation on the discussion control, presents the interactive interface of the target video and displays the discussion area corresponding to the target key video snippet in the interactive interface.
Wherein, the discussion area corresponding to the target key video snippet is used for: and displaying the social dynamic content corresponding to the target discussion group associated with the target key video snippet. The specific determination manner of the target discussion group can be referred to the above embodiments, and the repetition is not repeated.
In embodiments of the present application, there are a variety of ways to highlight the discussion control, such as color change, magnification, highlighting, etc.
Still referring to fig. 9, S92 shows a discussion control, in the form of a button, highlighted by color change in an embodiment of the present application. When the 'crack CP' function is not turned on, the discussion control is displayed normally; the target object clicks the normally displayed discussion control and, after entering the interactive interface, the labels of all discussion groups are displayed and the discussion area corresponding to the first discussion group is displayed by default. When the 'crack CP' function is turned on, the discussion control can be highlighted as at S92 or in another manner; the target object clicks the highlighted discussion control and, after entering the interactive interface, the discussion area corresponding to the target discussion group is displayed directly.
It should be noted that the description above is given mainly from the direction of CP guiding; of course, interactive guiding in other scenes is equally applicable and is not described in detail herein.
The following describes the secondary creation of the target content in the embodiment of the present application:
To address the problems that the related technology cannot satisfy a user's desire to create around the interaction state between at least two video objects, and that the social dynamic content of the interactive interface is similar and lacks novel, high-quality content, the present application introduces the function of performing secondary creation on the intercepted target content with respect to the interaction state.
Specifically, in the present application, each editing function is directed at the pictures or audio related to the video objects in the target video. If, while watching the target video, the target object wants to develop an interaction (such as a discussion) around the interaction state of at least two video objects in a certain picture, a certain video clip, etc., the editing operation on the target content can be triggered to perform secondary creation on the target content.
In an alternative embodiment, S32 may be implemented according to the following procedure, including sub-steps S321 to S323:
s321: at least one editing function is presented in response to an editing operation on the target content.
Referring to fig. 10, a schematic diagram of a secondary-creation interface according to an embodiment of the application is shown. After the target object clicks a screenshot button to capture the picture, the target object enters the secondary-creation interface, also called the secondary-creation panel. The secondary-creation panel includes the following editing functions: text, zoom-in, warp, press-head, splice, material library, as shown in the S101 portion of fig. 10. If the target object selects a specified editing function in S101 (any one or more may be selected), the effect page related to the specified editing function can be entered and the effect controls corresponding to the specified editing function displayed; a specified effect control is then selected to realize the corresponding editing effect. If the target object selects the posting control shown at S102, the target content can be published to the discussion area in the form of social dynamic content.
The specific descriptions of these types of editing functions are as follows:
Wherein the amplifying (zoom-in) function can amplify a selected part of one or more frames of images (hereinafter referred to as pictures) contained in the pictures and video clips, with the degree of amplification adjusted through a slide bar. The splice function includes, but is not limited to: cropping, rotating, and collage. The warp function means that different warping effects can be selected to warp the whole or part of the picture. The press-head function means bringing the heads of the video objects to be edited in the picture closer together; similarly, a hand-holding function (bringing the hands of the video objects to be edited in the picture into contact), a hug function (e.g., adjusting a video object to be edited in the picture to hug the other side with an arm), and the like may also be provided. The text function means that the inserted text content can be customized, with selectable fonts, colors, stroke styles, and so on. The material library function can provide a variety of materials such as filters, stickers, templates, etc.
In the embodiment of the present application, the basic capabilities of the secondary-creation panel listed above are classified into magnifier, cropping, image stitching, text layers, filters and stickers, etc. They can be realized by acquiring the material resources through the screenshot function of the player and then introducing basic secondary-creation editing capabilities such as stickers, filters, text, and picture adjustment via an image editing capability, such as a third-party special effect SDK, PhotoEditor SDK, and the like.
The effect of the local picture magnifier can be realized by dynamically calculating the background-image: the offset and the display width and height of the magnifier are calculated from information such as the magnification, the mouse/touch coordinates, the width and height of the displayed content, and the width and height of the picture's actual container. Image stitching can be achieved with Canvas by calculating picture coordinate offsets for rendering, and so on.
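A sketch of the background-image magnifier calculation described above; parameter names and the lens element are illustrative assumptions. The scaled picture is used as the lens background and shifted so that the pointed-at spot sits at the lens center.

```typescript
// Sketch: compute background-size and background-position for a magnifier lens.
interface MagnifierState {
  backgroundSize: string;      // e.g. "3840px 2160px"
  backgroundPosition: string;  // e.g. "-1234px -567px"
}

function magnifier(zoom: number,
                   pointerX: number, pointerY: number, // mouse/touch coords in the picture
                   displayW: number, displayH: number, // displayed size of the content
                   lensW: number, lensH: number): MagnifierState {
  // Scale the whole picture up, then offset it so the pointed-at spot
  // is centered inside the lens.
  const bgX = -(pointerX * zoom - lensW / 2);
  const bgY = -(pointerY * zoom - lensH / 2);
  return {
    backgroundSize: `${displayW * zoom}px ${displayH * zoom}px`,
    backgroundPosition: `${bgX}px ${bgY}px`,
  };
}
```

The returned values would be applied as inline CSS on the lens element on every pointer-move event.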
It should be noted that the implementation manners of the basic capabilities listed above are merely simple examples; other manners are equally applicable and are not described here in detail.
In addition, the editing functions mentioned above are all directed at the picture information of the target content; certain editing can also be performed on the audio information of the target content, such as replacing the background music. For this, a 'music library' can be added to the secondary-creation panel listed in fig. 10, so that the target object can choose among various background music. In addition, image replacement and the like may be performed on the video picture, which is not specifically limited herein.
For example, for secondary creation while following a series, after the material pictures and video are acquired, creation of the video type can be completed through a template rendering scheme; for example, using server-side video template rendering or the Portable Animated Graphics (PAG) dynamic material format, an authored video is rendered by replacing materials such as pictures and audio.
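The interface below is hypothetical, not the real PAG SDK API; it only sketches the replace-and-render flow described above: load a template with replaceable slots, swap in the captured picture and audio materials, and export the authored video.

```typescript
// Hypothetical template-rendering interface (assumed, for illustration only).
interface VideoTemplate {
  replaceImage(slot: string, imageUrl: string): void;
  replaceAudio(slot: string, audioUrl: string): void;
  export(): Promise<Blob>; // rendered video
}

// Assumed loader; a real PAG or server-side renderer would differ.
declare function loadTemplate(templateId: string): Promise<VideoTemplate>;

async function renderAuthoredVideo(capturedFrame: string, music: string): Promise<Blob> {
  const tpl = await loadTemplate('hug-scene-template'); // assumed template id
  tpl.replaceImage('main-frame', capturedFrame);        // swap in the screenshot
  tpl.replaceAudio('bgm', music);                       // swap in the chosen music
  return tpl.export();
}
```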
Still alternatively, the editing function for audio information may also mean: dubbing one or more video objects in the target video to replace the original audio, so as to obtain target content whose audio information has been edited.
Specifically, when dubbing a certain video object or objects, the voice of a virtual character provided by the video platform can be selected, with an emotional tone added to the voice for dubbing. For this, a 'dubbing library' can be added to the secondary-creation panel illustrated in fig. 10, so that the target object can select different virtual character voices. Alternatively, the target object's own voice can be used for dubbing.
In addition, under this function, the target object may dub based on the original lines of the video content, or may dub with self-created lines, and the like, which is not specifically limited herein.
Alternatively, the audio of the target content may be edited by adding narration to it. Specifically, when adding narration, the target object can select different virtual character voices or use the target object's own voice, which is not repeated here.
In the embodiment of the application, when the audio information of the at least two video objects to be edited in the target content is edited based on the specified editing function so as to adjust the interaction state between them, such that the interaction state after editing conforms to the expected interaction scene, the lines of at least one of the at least two video objects to be edited can be adjusted.
For example, if the expected interaction scene is a talking-and-laughing scene and the lines of the at least two video objects to be edited carry no joking atmosphere, the audio information related to the video objects to be edited can be given a more joking atmosphere by re-dubbing at least one of them or simply adjusting the lines, so that the interaction state of the at least two video objects to be edited after editing conforms to the expected interaction scene.
In the re-dubbing case, suitable lines can be re-created for dubbing based on the expected interaction scene and the elements of the current target content other than the video objects to be edited (such as the place where the video object to be edited currently is, the matter currently being handled, the current action, and so on); alternatively, appropriate modal particles can be added and the tone of speech adjusted on the basis of the original lines before dubbing; and so on.
Line adjustment can, on the basis of the original lines, replace some or all of the lines of some or all of the video objects with content that better conforms to the expected interaction scene; alternatively, suitable lines can be automatically created anew based on the expected interaction scene and the elements of the current target content other than the video objects to be edited; and so on.
Taking line adjustment as a simple example: suppose the two video objects to be edited are character A and character B, and character A needs to tell a joke to lighten the atmosphere for character B, but the line character A currently speaks is a rather obscure joke. By adapting character A's lines, the line can be changed into, or directly replaced with, a more suitable joke, and so on.
It should be noted that the editing functions for picture information and audio information and the effect implementations listed above are only simple examples; in fact, any editing function for picture information or audio information and other implementation manners are equally applicable to the embodiments of the present application and are not described here in detail.
S322: At least one effect control corresponding to a specified editing function is presented in response to a selection operation on the specified editing function among the at least one editing function.
S323: Each time an editing operation triggered through one specified effect control of the at least one effect control is responded to, editing processing is performed on the corresponding part of the target content based on that specified effect control, and the corresponding editing effect is presented.
The different editing functions listed above may each correspond to one or more effect controls. The effect controls corresponding to the various editing functions are described below with reference to the accompanying drawings:
(I) The text function.
The text function in the application means that text content can be inserted in a customized manner into a picture contained in the target content; specifically, the font, color, stroke style and the like of the text can be selected.
After the target object clicks the text button in the secondary-creation panel shown in fig. 10, the 'text' editing function is selected and the page shown in fig. 11A is presented, which is a font effect page in the embodiment of the present application. Some text font effect controls are displayed in the page, as at S111; in the initial state the selected font is 'default', i.e., a default font (for example, a regular script), and the target object may also switch to other fonts, for example: round, light black, bold black, handwriting, etc.
The target object may input the text content to be inserted in the S112 portion and, after the input is completed, click the button on the right side of the S112 portion to insert the input text content. The inserted text content is displayed in the S113 portion of the picture, and the target object may adjust its position, size, angle, etc. through the S113 portion, or delete it.
Alternatively, some text templates may be provided, such as 'the leads are too good-looking', for the object to select or further edit.
Further, the target object may also switch to a color effect page, as shown in fig. 11B, in which some text color effect controls are displayed; the S114 portion of the page in fig. 11B shows some of the font color examples listed in the embodiment of the present application.
It should be noted that, due to drawing format requirements, in fig. 11B the font colors other than white and black are represented by fill patterns of different styles; for example, the first square in S114 represents white, the second black, the third red, the fourth blue, the fifth green, and so on.
Further, the target object may also switch to a stroke effect page, as shown in fig. 11C, in which two font stroke effect controls are displayed, 'no stroke' and 'stroke', with the 'no stroke' effect currently selected.
After finishing editing based on the text function, the done button can be clicked to keep the current editing effect and return to the secondary-creation interface shown in fig. 10. On this basis, the target object can continue to select other editing functions, or choose to publish the target content, save it to the album, share it, and so on.
It should be noted that the text function listed above may be used in combination with the specified editing function: for example, after the object edits the target content based on the specified editing function, the target content may be further processed based on the text function; alternatively, the target content may first be processed based on the text function and then processed based on the specified editing function, and so on.
(II) The amplifying function.
The amplifying function in the application supports amplifying a local area of the picture.
After the target object clicks the zoom-in button in the secondary-creation panel shown in fig. 10, the 'zoom in' editing function is selected and the page shown in fig. 12 is presented, which is a schematic diagram of a zoom-in effect page in the embodiment of the present application. The slide bar illustrated in the S121 portion of fig. 12 is the effect control related to the amplifying function in the embodiment of the present application; the magnification can be adjusted by pulling this bottom slide bar, over a range of 1-10×, i.e. 1 to 10 times (the current magnification is, for example, 6 times). The magnifier shown in the S122 portion is also adjustable in size and position.
It should be noted that the amplifying function may also be used in combination with the specified editing function: for example, after the object edits the target content based on the specified editing function, the target content may be further processed based on the amplifying function; alternatively, the target content may first be processed based on the amplifying function and then processed based on the specified editing function, and so on.
(III) The warp function.
The warp function in the present application supports selecting different warping effects, such as mirror effects, fish-eye effects, vortex effects, cylindrical effects, ripple effects, etc.
After the target object clicks the warp button in the secondary-creation panel shown in fig. 10, the 'warp' editing function is selected. When the object selects the mirror effect under the warp function (in the present application, the mirror effect may be selected by default after clicking warp, or may be selected manually by the target object), the page shown in fig. 13A is presented, which is a schematic diagram of the mirror effect in the embodiment of the present application: the picture is mirrored, yielding the effect diagram shown in fig. 13A.
On this basis, the target object may also switch to other warping effects, such as the fish-eye effect, whereupon the page shown in fig. 13B is presented. Fig. 13B is a schematic diagram of the fish-eye effect according to an embodiment of the application. The fish-eye effect manifests as follows: within a circular 'fish eye' area, the closer a pixel is to the center of the circle, the more it is shifted. If fish-eye processing is performed on a certain part of the picture, the effect diagram shown in fig. 13B can be obtained.
In the present application, the other effects, such as vortex, cylindrical surface, and ripple, are similar and will not be described in detail herein.
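A sketch of one possible fish-eye sampling mapping follows; the exact mapping is an assumption, since many fish-eye variants exist. Inside the circular region, each destination pixel samples the source picture closer to the center, so content near the center appears magnified and shifted most.

```typescript
// Sketch: map a destination pixel (x, y) to the source coordinates to sample
// for a fish-eye warp centered at (cx, cy) with the given radius.
function fisheyeSample(x: number, y: number,
                       cx: number, cy: number,
                       radius: number): [number, number] {
  const dx = x - cx, dy = y - cy;
  const r = Math.hypot(dx, dy);
  if (r >= radius || r === 0) return [x, y]; // outside the "fish eye": unchanged
  const scale = r / radius;                  // 0 near the center, 1 at the rim
  return [cx + dx * scale, cy + dy * scale]; // sample nearer the center
}
```

To render the warped picture, each destination pixel would be filled from the source coordinates returned here, for example via Canvas getImageData/putImageData.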
It should be noted that, for simplicity and clarity, the schematic diagrams of the several editing functions above take pictures containing only one video object as examples; the processing is the same when a picture contains multiple video objects, which is not repeated here. Moreover, these editing functions can be used to edit the target content before or after the interaction state of the video objects to be edited in the target content is adjusted based on the specified editing function, which is not limited herein.
It should be noted that the warp function listed above may also be used in combination with the specified editing function: for example, after the object edits the target content based on the specified editing function, the target content may be further processed based on the warp function; alternatively, the target content may first be processed based on the warp function and then processed based on the specified editing function, and so on.
(IV) The press-head function.
The press-head function in the application provides two effects: a fine-repair version and a funny version. The fine-repair version mattes the characters out of the picture and moves their positions; the funny version shows fingers pulling the body parts of the characters in the picture so that they are pressed together.
As shown in fig. 14A, after the target object clicks the press-head button in the secondary-creation panel, the 'press head' editing function is selected, which corresponds to two effect controls: a fine-repair control and a funny control. When the object selects the fine-repair effect under the press-head function (in the present application, the fine-repair effect may be selected by default after clicking press head, or may be selected manually by the target object), the schematic diagram of the fine-repair effect shown in the lower part of fig. 14A can be presented. The fine-repair press-head effect is obtained by matting the characters out of the picture while preserving the background; on this basis, the target object can further adjust the size, position, angle, etc. of the characters.
The press-head function here is merely an example; similarly, there may also be a hug function, a hand-holding function, and the like, which may collectively be referred to as editing functions for a specified part of a video object.
The press-head functions listed above generally belong to editing functions directed at specified parts of video objects; such editing functions can serve as the specified editing function in the embodiment of the application, so that the interaction state of the at least two video objects to be edited can be adjusted to conform to the expected interaction scene.
In an alternative embodiment, when the specified editing function is an editing function for a specified part of a video object, step S323 is implemented as follows:
The client, in response to an editing operation triggered through the refined-effect control among the at least one effect control, identifies at least two video objects to be edited in the target content, moves at least one of them according to the relevant part based on the refined-effect control, and presents the refined-version editing effect.
For an object to be edited, the relevant part refers to one or more editable parts that fit the expected interaction scenario, such as the head for the head-pressing function, the hands for the hand-holding function, or the upper limbs and upper body for the hugging function. The relevant part may also refer to every part of the object to be edited, that is, the entire object.
Taking the head-pressing function as an example, the relevant part of the video objects to be adjusted is the head. Suppose the target content is captured by the target object, and the video objects to be edited are the members of the designated discussion group (the CP) corresponding to the target content. Since 'pressing heads' here means bringing heads together, at least one member of the CP can be moved so that the heads approach each other.
In the embodiment of the present application, when at least one of the at least two video objects to be edited is moved according to the relevant part, an alternative implementation is as follows:
Move at least one video object to be edited so that the distance between the target areas in the relevant parts of the video objects is adjusted into a target distance range, and display the refined-version editing effect.
Depending on the actual situation, the target area may be part or all of the relevant part, such as the lips within the head, an entire upper limb, part of an object, the whole object, a flower of a plant, the petals of a flower, and so on.
In the above embodiment, when at least one video object to be edited is moved according to the relevant part, a movement target may be set, namely a target area within the relevant part (e.g., the lips under the head-pressing function); once the movement brings the distance between the target areas of the video objects within the target distance range, the editing is complete.
Specifically, when the target object selects the refined-effect control (e.g., 'refined version' below the screenshot in fig. 14A), the video objects to be edited, such as the male and female leads, are identified. The editing function is then to press their heads together: the male or female lead is moved in the direction that brings the heads closer, until the distance between the lips of the two heads is smaller than a preset distance, or the lips touch, and so on. The specific condition can be set flexibly and is not specifically limited here.
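The embodiment does not prescribe a concrete implementation of this movement; the following TypeScript fragment is a minimal sketch of one possibility, assuming the recognition step supplies a matted layer and a lip anchor point for each identified object (all type names, field names, and the threshold value are illustrative assumptions):

```ts
interface Point { x: number; y: number; }

interface EditableObject {
  layer: { offsetX: number; offsetY: number }; // matted character layer
  lipAnchor: Point;                            // target area within the relevant part
}

const TARGET_DISTANCE = 4; // px; stands in for the configurable "preset distance"

function distance(a: Point, b: Point): number {
  return Math.hypot(a.x - b.x, a.y - b.y);
}

// Moves `mover` toward `anchor` until the two lip anchors fall within the
// target distance range, at which point the refined-version edit is complete.
function pressHeadsRefined(mover: EditableObject, anchor: EditableObject): void {
  const d = distance(mover.lipAnchor, anchor.lipAnchor);
  if (d <= TARGET_DISTANCE) return;       // already within range
  const t = (d - TARGET_DISTANCE) / d;    // fraction of the gap to close
  const dx = (anchor.lipAnchor.x - mover.lipAnchor.x) * t;
  const dy = (anchor.lipAnchor.y - mover.lipAnchor.y) * t;
  mover.layer.offsetX += dx;
  mover.layer.offsetY += dy;
  mover.lipAnchor.x += dx;
  mover.lipAnchor.y += dy;
}
```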
Fig. 14B is a schematic diagram of the comedic-version head-pressing effect in the embodiment of the present application. Fig. 14B shows the result of keeping the characters' positions in the picture unchanged while connecting the lips and applying blurring.
In another alternative embodiment, step S323 is implemented as follows:
The client, in response to an editing operation triggered through the comedic-effect control among the at least one effect control, identifies at least two video objects to be edited in the target content, connects the relevant parts of the at least two video objects based on the comedic-effect control, and presents the comedic-version editing effect.
Specifically, compared with the refined version, the comedic version does not move the video objects to be edited according to the relevant parts. Instead, the positions of the video objects in the picture are kept unchanged and their relevant parts are connected directly, that is, visually joined.
Alternatively, the connection of the relevant parts of the video objects to be edited can be achieved as follows:
Warp the target area in the relevant part of at least one video object to be edited so that it connects with the relevant parts of the other video objects to be edited, and present the comedic-version editing effect.
The warping may be pushing, pulling, rotating, squeezing, expanding, or the like, and is not specifically limited here.
In the above embodiment, a connection target may be set, namely a target area within the relevant part (e.g., the lips under the head-pressing function); by warping the target area of at least one video object to be edited, the relevant parts of the video objects can be connected to complete the editing.
As shown in fig. 14B, the lips of both the male and female leads are warped and stretched to some extent until they connect, and some blurring is applied to achieve a comedic kissing effect. Alternatively, only the lip region of the male lead may be warped and stretched; this is not specifically limited.
When a region is warped, this can be implemented with a liquify transformation technique, which supports pushing, pulling, rotating, squeezing, or expanding an image region. Other techniques that achieve the warping effect are also possible, and no specific limitation is imposed here.
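As an illustration only, a liquify 'push' can be realized as a backward-mapped displacement over canvas ImageData; the falloff formula below is one common choice and is an assumption, not the embodiment's prescribed algorithm:

```ts
// Liquify "push": pixels inside a circular brush are displaced along the
// drag vector, with the displacement fading toward the brush edge.
function liquifyPush(
  src: ImageData,
  cx: number, cy: number,   // center of the brush
  dx: number, dy: number,   // drag vector (push direction)
  radius: number
): ImageData {
  const out = new ImageData(src.width, src.height);
  for (let y = 0; y < src.height; y++) {
    for (let x = 0; x < src.width; x++) {
      const dist = Math.hypot(x - cx, y - cy);
      let sx = x, sy = y;
      if (dist < radius) {
        // Smooth falloff: strongest at the center, zero at the brush edge.
        const k = (1 - dist / radius) ** 2;
        // Backward mapping: sample from where the pixel was pushed from.
        sx = Math.min(src.width - 1, Math.max(0, Math.round(x - dx * k)));
        sy = Math.min(src.height - 1, Math.max(0, Math.round(y - dy * k)));
      }
      const si = (sy * src.width + sx) * 4;
      const di = (y * src.width + x) * 4;
      out.data.set(src.data.subarray(si, si + 4), di);
    }
  }
  return out;
}
```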
In the above embodiments, the corresponding editing effect can be achieved simply by setting a movement target or a connection target based on the target area of at least one video object to be edited, which keeps the processing simple.
One implementation of the head-pressing function described above is briefly explained below with reference to fig. 14C:
Referring to fig. 14C, a logic diagram for implementing the head-pressing function in an embodiment of the present application. The head-pressing function of the application is mainly divided into a refined version and a comedic version.
The refined version mainly uses portrait-segmentation capability to obtain the character layer in the screenshot, and achieves the head-pressing effect through basic editing operations such as dragging and rotating. Portrait segmentation may use cloud capabilities, such as a Segment Portrait product offered by a cloud vendor. Alternatively, an image-segmentation model such as PPSeg can be built and deployed on the Web (i.e., client) side via Paddle.js, TensorFlow.js, the Web Graphics Library (WebGL), and so on.
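For illustration, client-side matting of the character layer might look like the following sketch, which substitutes the publicly available TensorFlow.js BodyPix model for the segmentation models named above (PPSeg via Paddle.js); the substitution is an assumption made purely to keep the example self-contained:

```ts
import '@tensorflow/tfjs';
import * as bodyPix from '@tensorflow-models/body-pix';

// Returns the current video frame with background pixels made transparent,
// i.e. the draggable/rotatable character layer used by the refined version.
// Assumes a same-origin video (otherwise the canvas is tainted).
async function extractCharacterLayer(video: HTMLVideoElement): Promise<ImageData> {
  const net = await bodyPix.load();
  const segmentation = await net.segmentPerson(video);

  const canvas = document.createElement('canvas');
  canvas.width = video.videoWidth;
  canvas.height = video.videoHeight;
  const ctx = canvas.getContext('2d')!;
  ctx.drawImage(video, 0, 0);
  const frame = ctx.getImageData(0, 0, canvas.width, canvas.height);

  // segmentation.data holds one 0/1 flag per pixel; zero out the alpha
  // channel of background pixels so only the character remains visible.
  for (let i = 0; i < segmentation.data.length; i++) {
    if (segmentation.data[i] === 0) frame.data[i * 4 + 3] = 0;
  }
  return frame;
}
```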
The comedic version mainly uses liquify capability to make the CP kiss at a suitable position in the screenshot. The function takes character spacing, orientation, and lip position as key indicators, and identifies the character objects to be operated on (i.e., the video objects to be edited) in complex scenes with the help of auxiliary recognition tools. When the material satisfies the conditions, a liquify transformation is applied to the lips of the portraits to achieve a comedic kissing effect.
It should be noted that the head-pressing function listed here is only a simple example. Similar functions may include a hugging function, a hand-holding function, and so on, each of which may likewise be divided into a refined version, a comedic version, and the like; these are not described in detail here.
The above takes as an example the target object selecting an editing function for a specified part of a video object from one or more editing functions. When the target object has turned on the CP mode, the default editing function may be an editing function for a specified part of a video object, namely:
If, before responding to the editing operation on the target content, the client has already responded to the object interaction operation and turned on the CP mode, step S32 may be implemented in either of the following ways:
In the first way, at least two video objects to be edited in the target content are identified in response to the editing operation on the target content, at least one of them is moved according to the relevant part, and the refined-version editing effect is presented.
Optionally, when at least one of the at least two video objects to be edited is moved according to the relevant part, an alternative embodiment is as follows:
Move at least one video object to be edited so that the distance between the target areas in the relevant parts of the video objects is adjusted into a target distance range, and display the refined-version editing effect.
This means that after the target object turns on the CP mode and captures the target content, when the target object wants to edit it, the video objects to be edited in the target content are by default edited with the refined-version effect, such as 'refined version' in fig. 14A. For the specific editing process, refer to the above embodiments; repetition is omitted.
In the second way, at least two video objects to be edited in the target content are identified in response to the editing operation on the target content, the relevant parts of the at least two video objects are connected, and the comedic-version editing effect is presented.
Alternatively, the connection of the relevant parts of the video objects to be edited can be achieved as follows:
Warp the target area in the relevant part of at least one video object to be edited so that it connects with the relevant parts of the other video objects to be edited, and present the comedic-version editing effect.
This means that after the target object turns on the CP mode and captures the target content, when the target object wants to edit it, the video objects to be edited in the target content are by default edited with the comedic-version effect, such as 'comedic version' in fig. 14B. For the specific editing process, refer to the above embodiments; repetition is omitted.
Alternatively, after responding to the editing operation on the target content, the related effect controls are presented, and the target object chooses the refined-effect control or the comedic-effect control for the corresponding editing.
It should be noted that the above takes expected interaction scenarios of the 'contact' type as examples, such as the hand-holding function and the hugging function.
Taking the hand-holding function as an example, the corresponding objects to be edited include two characters. The refined-version hand-holding function moves at least one character so that the distance between target areas (e.g., fingers) of at least one hand of each character is adjusted into a target distance range, presenting the refined-version editing effect. For example, one character is moved until the fingers of that character's left hand touch the fingers of the other character's right hand.
The comedic-version hand-holding function warps a target area (e.g., fingers) in at least one character's hand so that it connects with the other character's hand, presenting the comedic-version editing effect, for example by stretching the fingers of one character's left hand to connect with the fingers of the other character's right hand.
Taking the hugging function as an example, the corresponding objects to be edited include a character and an animal. The refined-version hugging function moves the character so that the distance between the character's arm and the animal is adjusted into the target distance range, presenting the refined-version editing effect. The same can also be achieved by moving the animal, or by moving both the character and the animal, and so on.
The comedic-version hugging function warps the character's arm so that it connects with the animal's body, presenting the comedic-version editing effect. Alternatively, any area of any body part of the animal may be warped to make the connection, and so on.
Furthermore, the application is not limited to this type of expected interaction scenario; the action, suspense, campus, and workplace scenarios listed above are also possible.
For the fighting function in an action scenario, the corresponding objects to be edited include a character and fighting equipment. The refined-version fighting function moves the character so that the distance between the character's palm and a target area of the equipment (e.g., a sword hilt) is adjusted into the target distance range, presenting the refined-version editing effect. Alternatively, the equipment may be moved instead, or the character and the equipment may be moved at the same time.
The comedic-version fighting function warps the character's palm so that it connects with the target area of the equipment (e.g., the sword hilt), presenting the comedic-version editing effect. Alternatively, the target area of the equipment may be warped to connect with the palm, or the target area of the equipment and the palm may be warped at the same time, and so on.
As another example, for the pointing function in a suspense scenario, the corresponding objects to be edited include a detective and a piece of key evidence. The refined-version pointing function moves the detective so that the distance between the detective's finger and the key evidence is adjusted into the target distance range, presenting the refined-version editing effect. Alternatively, the key evidence may be moved, or both the detective and the evidence may be moved, and so on.
The comedic-version pointing function warps the detective's finger so that it connects with a target area (any area) of the key evidence, presenting the comedic-version editing effect. Alternatively, the target area of the evidence may be warped to connect with the detective's finger, or both may be warped at the same time, and so on.
For the studying function in a campus scenario, the corresponding objects to be edited include a student and a textbook. The refined-version studying function moves the student so that the distance between the student's hand and a target area (any area) of the textbook is adjusted into the target distance range, presenting the refined-version editing effect. Alternatively, the textbook may be moved, or both the student and the textbook may be moved, and so on.
The comedic-version studying function warps the student's hand so that it connects with a target area (any area) of the textbook, presenting the comedic-version editing effect. Alternatively, the target area of the textbook may be warped to connect with the student's hand, or both may be warped at the same time, and so on.
In addition, in the studying scenario the objects to be edited may also be several students, a teacher, and so on; the editing under the corresponding studying function is similar to the above, namely moving the students or the teacher, or warping the target area of a relevant part of a student or the teacher (for example, placing the teacher's hand on a student's shoulder) to achieve the corresponding editing effect, which is not enumerated here.
For the office function in a workplace scenario, take a staff member and a computer as the video objects to be edited. The refined-version office function moves the staff member so that the distance between the staff member's arm and the computer keyboard (or a mouse, etc.) is adjusted into the target distance range, presenting the refined-version editing effect. Alternatively, the computer may be moved, or both the staff member and the computer may be moved, and so on.
The comedic-version office function warps the staff member's arm so that it connects with the computer keyboard, presenting the comedic-version editing effect. Alternatively, the keyboard may be warped to connect with the arm, or the keyboard and the arm may be warped at the same time, and so on.
Fig. 15 is a schematic diagram of the effect corresponding to the comedic-version office function in the embodiment of the present application, taking a workplace as the expected interaction scenario. In the currently captured target content (a screenshot), the staff member is playing with a mobile phone in front of the computer; the staff member and the computer are the video objects to be edited, and there is as yet no interaction between them.
After the 'office' button is clicked, the corresponding specified editing function is triggered to adjust the staff member's arm, as shown in the lower part of fig. 15, so that the staff member connects with the computer keyboard, producing the interaction state of working earnestly.
It should be noted that the specified editing functions and corresponding editing processes listed above are only examples; the application is not limited to them. Any editing process that fits the expected interaction scenario is applicable to the embodiments of the application and is not described further here.
In the above embodiment, if the target object has turned on the CP mode, the CP editing function is selected by default to edit the target content accordingly, which saves the target object a selection step and makes the operation simpler and more convenient.
(V) Splicing function.
The splicing function in the embodiment of the application provides three effects: cropping, rotating, and collage. The collage effect has an entry for uploading local pictures, so the target object can combine several pictures into one.
After the target object clicks the splice button in the secondary-creation panel shown in fig. 10, the 'splice' editing function is selected. The function corresponds to the three effects of cropping, rotation, and collage, each of which also provides some selectable effect controls. The effects are described in turn below:
(1) Splice_crop:
Referring to fig. 16A, a schematic diagram of a cropping-effect page according to an embodiment of the present application, the 'free form' cropping effect is taken as an example; the cropping-effect page also provides: square, 9:16, 4:5, 5:7, etc.
(2) Splice_rotate:
Referring to fig. 16B, a schematic diagram of a rotation-effect page according to an embodiment of the present application, four rotation effects are provided, taking 'flip vertically' as an example; the others are: rotate left 45°, rotate right 45°, flip horizontally, etc. Rotate left 45° and rotate right 45° can be clicked repeatedly.
(3) Splice_collage:
Referring to fig. 16C, a schematic diagram of a collage-effect page (i.e., a collage interface) according to an embodiment of the present application, several different collage templates are provided, such as the 11 templates shown at the bottom of the page in fig. 16C, with the 4th template currently selected.
As noted above, for simplicity and clarity these schematic diagrams take a picture containing only one video object as an example; the same processing applies when a picture contains multiple video objects, which is not repeated here. These editing functions may likewise be used to edit the target content either before or after the interaction state of the video objects to be edited is adjusted based on the specified editing function, which is not limited here.
In an alternative embodiment, when the target content is a screenshot, besides editing the target content with the specified editing function as above, a collage may further be made based on the splicing function. One possible embodiment is as follows:
The client, in response to an editing operation triggered through the collage-effect control among the at least one effect control, presents a collage interface containing the target content with the editing effect.
A collage in the application can not only combine different frames that the target object has captured from the currently playing target video; the target object may also select pictures from the local gallery, or the video platform may recommend other related video frames based on the target content (such as scene shots related to the video objects in the target content) for the target object to add.
Specifically, the client, in response to a picture-adding operation on the collage interface, splices at least one newly added picture to be spliced with the target content, where each picture to be spliced is another video frame selected from the local gallery or recommended based on the target content.
Taking adding a local picture as an example, assume the target content with the editing effect is the screenshot with the comedic-version head-pressing effect listed in fig. 14B. Referring to fig. 16D, a schematic diagram of another collage-effect page according to an embodiment of the present application: after selecting the 10th of the 11 templates shown at the bottom of the page in fig. 16D, the target object uses the screenshot with the comedic-version head-pressing effect as the first image, and may further upload local images to build the collage, e.g., clicking '+' in fig. 16D to add more local images.
In the above embodiment, the splicing function can combine different frames of the currently playing target video, or combine frames of the target video with local pictures from the local gallery, strengthening the personal character of the content the target object creates and thereby producing new, high-quality content for discussion.
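A minimal sketch of the composition step follows, assuming a collage template is simply a list of rectangular cells; the template format and names are illustrative, not the embodiment's actual data structure:

```ts
interface TemplateCell { x: number; y: number; w: number; h: number; }

// Draws the edited screenshot plus any added pictures into the cells of
// the selected template (e.g. the 10th template in fig. 16D).
function renderCollage(
  canvas: HTMLCanvasElement,
  images: CanvasImageSource[],   // first entry: target content with effects
  template: TemplateCell[]       // one cell per picture
): void {
  const ctx = canvas.getContext('2d')!;
  ctx.clearRect(0, 0, canvas.width, canvas.height);
  template.forEach((cell, i) => {
    if (images[i]) ctx.drawImage(images[i], cell.x, cell.y, cell.w, cell.h);
  });
}
```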
It should be noted that any of the splicing functions listed above may be combined with the specified editing function: for example, after the target object edits the target content based on the specified editing function, the target content may be further processed based on the splicing function, or the target content may first be processed based on the splicing function and then edited based on the specified editing function, and so on.
(VI) Material library.
The material library in the application provides two types of material: filters and stickers. The target object may select different filter effects and add various stickers to the picture.
(1) Material library_filter:
Fig. 17A is a schematic diagram of a filter-effect page according to an embodiment of the application. Besides the original, fig. 17A illustrates several different filter effects such as vivid, high-contrast, cold-tone, and black-and-white; the target object can select different filter effects and switch between them by sliding.
The currently selected filter in fig. 17A is 'high-contrast'. Because of the formatting requirements for the drawings, fig. 17A uses fill patterns of different styles to represent the different filter effects, for reference only.
(2) Material library_sticker:
Fig. 17B is a schematic diagram of a sticker-effect page according to an embodiment of the application. The bottom of the page in fig. 17B lists several different sticker effects; the target object may select different stickers and slide left and right through them.
When the target object selects the 6th sticker, another sticker-effect page as shown in fig. 17C is presented, and the sticker is added to the picture (by default at the center of the picture, though another position can be set). After selecting a sticker, the target object can further adjust its position, size, angle, and so on, which is not described in detail here.
Similarly, any of the material-library functions listed above may be combined with the specified editing function: for example, after the target object edits the target content based on the specified editing function, the target content may be further processed based on the material-library function, or the target content may first be processed based on the material-library function and then edited based on the specified editing function.
It should be noted that the editing functions and related effect controls listed above are only simple examples; any of them is applicable to the embodiments of the present application and is not described in detail here.
The following takes as an example superimposing other editing functions after the target content has been edited with the specified editing function, using the comedic-version head-pressing effect listed in fig. 14B.
Fig. 18A is a schematic diagram of the effect of superimposing other editing functions on the specified editing function in the embodiment of the present application. After editing based on the specified editing function is complete, the target object may click the 'done' button to keep the current editing effect, as in effect (a) in fig. 18A. On this basis, the target object may return to the secondary-creation interface shown in fig. 10 and continue selecting other editing functions, such as the mirror effect among the warping functions, producing the effect shown in (b) of fig. 18A. Further, the target object can use the text function to add the handwritten caption 'too good cheer', producing the effect shown in (c) of fig. 18A.
Fig. 18B is a schematic diagram of the effect of superimposing other editing functions on another specified editing function in the embodiment of the present application. Starting from effect (a), the target object may continue by selecting a filter effect from the material-library function, producing the effect shown in (d) of fig. 18B. On this basis, the target object can select 'flip vertically' among the rotation effects of the splicing function, producing the effect shown in (e) of fig. 18B. Further, the target object can add a sticker from the material library, producing the effect shown in (f) of fig. 18B.
It should be noted that these superimposition effects are only simple examples; any way of superimposing the specified editing function with other editing functions is applicable to the embodiments of the present application and is not described in detail here.
In the embodiment of the application, after the secondary creation of the screenshot is complete, the target object can upload more pictures from the local gallery and perform secondary-creation operations on them as well. After picture editing is complete, the target object can add a text description for the pictures. Before posting, the target object needs to select the corresponding discussion-group topic for the post, so that the post can be published to the corresponding discussion area.
Fig. 19 is a schematic diagram of a picture-uploading process according to an embodiment of the application. After the target object edits the screenshot in the manner shown in fig. 14B, clicking 'post to discussion area' jumps to the interface shown in the lower part of fig. 19, which supports adding text, labels, pictures, and other objects.
The labels, i.e., the topic labels corresponding to the discussion groups, may also be regarded as the names of the discussion groups. The labels listed in fig. 19 are xx1, xx2, xx3, and xx4, representing four discussion groups. Before posting, the target object must select the label of the discussion area the post is to be sent to; 'xx1' is selected in fig. 19. After the post is edited, clicking the 'post' button publishes the post to the discussion area corresponding to the designated discussion group 'xx1'.
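For illustration, the posting request might carry a payload like the following sketch; the endpoint and field names are hypothetical, not the platform's actual API:

```ts
interface PostRequest {
  videoId: string;        // e.g. episode 1 of series xx
  discussionTag: string;  // selected discussion-group label, e.g. "xx1"
  text: string;           // the target object's description
  imageUrls: string[];    // edited screenshot plus any added pictures
}

// Submits the post to the discussion area of the selected group.
async function publishPost(req: PostRequest): Promise<void> {
  await fetch('/api/discussion/posts', { // endpoint is an assumption
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(req),
  });
}
```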
The following illustrates presentation logic for discussion area posts in an embodiment of the present application:
In the embodiment of the application, the display modes of discussion-area posts include but are not limited to the following three: latest, hot, and current.
That is, each discussion area includes, but is not limited to, at least one of the following content presentation modes:
(1) The "latest" mode: and displaying all the social dynamic contents which are related to the discourse and aim at the target video according to the posting time of the social dynamic contents.
I.e., all posts showing the episode being watched from new to old by the posting time of the posts.
(2) "Hot" mode: and displaying all the social dynamic contents which are related to the discourse and aim at the target video according to the interaction quantity of the social dynamic contents.
The interaction number is exemplified by the total number of comments and praise numbers, namely, all posts of the episode being watched are shown from high to low according to the total number of comments and praise numbers of the posts.
(3) The "current" mode: and displaying the social dynamic content which is related to the discussion area and is released in a set time period before and after the current playing time point of the target video according to the release time or the interaction quantity of the social dynamic content.
Taking the time length set as the time length from 1 minute before to 1 minute after the current playing time point as an example, namely showing posts in the time length from 1 minute before to 1 minute after the playing time point of the progress bar, the posts can be shown from high to low according to the total number of comments and praise numbers, and can also be shown from new to old according to the release time.
The content presentation modes listed above can be switched at will.
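For illustration, the three modes could be realized as a small sorting/filtering routine such as the following sketch, where the field names and the one-minute window are assumptions drawn from the example above:

```ts
interface Post {
  postedAt: number;     // posting time (ms epoch)
  videoTimeMs: number;  // playback position the post relates to (an assumption)
  comments: number;
  likes: number;
}

const WINDOW_MS = 60_000; // "current": one minute before/after the playback point

function sortPosts(
  posts: Post[],
  mode: 'latest' | 'hot' | 'current',
  playheadMs = 0
): Post[] {
  switch (mode) {
    case 'latest': // newest to oldest by posting time
      return [...posts].sort((a, b) => b.postedAt - a.postedAt);
    case 'hot':    // high to low by comments + likes
      return [...posts].sort(
        (a, b) => (b.comments + b.likes) - (a.comments + a.likes));
    case 'current': // posts around the current playback point, newest first
      return posts
        .filter(p => Math.abs(p.videoTimeMs - playheadMs) <= WINDOW_MS)
        .sort((a, b) => b.postedAt - a.postedAt);
  }
}
```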
After the target content with the editing effect is successfully published in the form of social dynamic content (a post), a publishing-success Toast (such as the Toast message box in the Android system) can be displayed, the sort order switches to 'latest', and the post just published by the target object is shown.
A Toast is a lightweight View that quickly shows a small amount of information, floating above the application; it never takes focus and does not interfere with the target object's input or other operations.
Fig. 20A is a schematic diagram of an interactive interface according to an embodiment of the application. The interactive interface floats over the right side of the video-playing interface, and a 'discussion published successfully' Toast is displayed. Fig. 20A shows the discussion area corresponding to the designated discussion group xx1: the post published by the target object appears in the 'latest' module, the posts in the module relate to the target video (e.g., episode 1 of series xx) and correspond to all posts labeled xx1, and they are displayed from newest to oldest by posting time.
After the target object clicks to switch to the 'hot' module, the interactive interface shown in fig. 20B is displayed. Similar to fig. 20A, the interface still shows the discussion area of the designated discussion group xx1; only the content presentation mode differs. Fig. 20B uses the 'hot' mode: the posts still relate to the target video (e.g., episode 1 of series xx) and correspond to all posts labeled xx1, and they are displayed from high to low by interaction count.
In the above embodiment, the display modes of the discussion area corresponding to each discussion group are divided into 'hot', 'latest', and 'current', so the target object can obtain content more efficiently as needed.
It should be noted that the application divides the discussion areas into several different areas, each with a discussion group as its topic. If the target object does not see a preferred video-object combination among the existing discussion groups, the target object can create a new discussion group and obtain the corresponding discussion area.
An alternative implementation creates a new discussion group and discussion area as follows:
The client, in response to a discussion-group creation operation triggered through the interactive interface, presents a plurality of video objects related to the target video; then, in response to the selection of at least two target video objects among them and a discussion-group naming operation, the client displays in the interactive interface a newly added discussion area corresponding to the discussion group formed by the at least two target video objects.
Referring to fig. 21, a schematic diagram of an add-discussion-group page in an embodiment of the present application. For example, when the target object clicks the '+add' button (a new-discussion-group button) to the right of the discussion-group label row shown in fig. 20B, the page shown in fig. 21 is rendered as a pop-up window for creating a new discussion group.
Fig. 21 lists several selectable video objects (taking characters as an example). The target object can select two of them to form a new CP; after naming the discussion area and selecting the two roles of the CP, the discussion area can be created.
It should be noted that fig. 21 takes characters as an example; animals, plants, or other objects in the video (such as textbooks, computers, or evidence) may also be selected, which is not limited here.
It should also be noted that the setting of exactly two selectable roles in fig. 21 is only one option; the number may be set to another specific value of at least 2, such as 3, or to a range of at least 2, such as 2 to 5, which is not limited here.
Fig. 22 is a schematic diagram of selecting video objects and naming a discussion group in the embodiment of the present application: the target object selects two of the listed characters, names the group 'sweet', and clicks the 'add' button, after which the interface jumps to that shown in fig. 23A, a schematic diagram of a creation-success prompt in the embodiment of the present application. Fig. 23A prompts 'CP added successfully', and the discussion area corresponding to 'sweet' is added to the interactive interface. Because the discussion area has just been created, it contains no posts yet.
In the above embodiment, when no existing discussion group matches the interaction the target object wants to develop, the target object can create a discussion group and its corresponding discussion area, enriching the interactive content.
Optionally, after the target object selects the video objects to form a new discussion group, and before the new discussion area is displayed in the interactive interface, it must be analyzed whether a discussion group identical to the newly created one already exists. If the roles selected at creation time already have a corresponding CP discussion area, the group cannot be created again, and a corresponding creation-failure prompt is displayed in the video-playing interface.
Referring to fig. 23B, a schematic diagram of a creation-failure prompt according to an embodiment of the present application. After the target object selects two video objects that already form an existing CP, names the group xx1, and clicks the add button, verification finds that a discussion group consisting of those two video objects already exists (regardless of whether the group names match), so the prompt 'this CP already exists' is shown.
In this embodiment, verification based on the selected video objects effectively prevents a newly created discussion area from duplicating an existing one.
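A sketch of this duplicate check follows: a new CP group is rejected when a group with the same member set already exists, regardless of its name (the types are illustrative assumptions):

```ts
interface DiscussionGroup { name: string; memberIds: string[]; }

// Returns the existing group with the same member set, if any,
// comparing members order-independently and ignoring the group name.
function findDuplicateGroup(
  existing: DiscussionGroup[],
  newMemberIds: string[]
): DiscussionGroup | undefined {
  const wanted = [...newMemberIds].sort().join('|');
  return existing.find(
    g => [...g.memberIds].sort().join('|') === wanted);
}

// Usage: if findDuplicateGroup(...) returns a group, show the
// "this CP already exists" failure prompt instead of creating it.
```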
It should be noted that the discussion-group creation process above takes video characters as an example; the creation processes for the animals, plants, and other objects listed above are the same and are not repeated here.
In the embodiment of the application, to further enrich video interactivity, the following idea is also proposed: when the number of posts related to a discussion area (i.e., posts whose topic label is the discussion group corresponding to that video discussion area) reaches an officially prescribed number, additional multimedia resources related to the discussion group, namely Easter eggs, can be unlocked. An Easter egg may be a video (e.g., exclusive CP behind-the-scenes or promotional footage, or interaction clips between characters and animals, between animals and plants, or between characters, animals, or plants and objects) or audio (e.g., exclusive CP audio or blessings, or interaction audio among characters, animals, plants, and objects), and so on.
An alternative embodiment is as follows: after the client determines that the total amount of social dynamic content related to the current discussion group has reached the specified number, the additional multimedia resources related to the current discussion group are unlocked; when the target object views an additional multimedia resource, the client plays it in response to the viewing operation.
The current discussion group may be any discussion group, specifically the one corresponding to the discussion area currently displayed in the discussion interface; in fig. 24 the current discussion group is 'xx1'.
In the embodiment of the application, when the post count of a CP discussion area reaches the officially prescribed number (i.e., the specified number), the target object can unlock that CP's Easter egg. The number of posts required to unlock the Easter egg and the specific form of the Easter egg may be determined by the episode's official operator and are not specifically limited here.
Optionally, a count progress bar corresponding to the total amount of social dynamic content related to the current discussion group may be displayed in the interactive interface, its total length determined by the specified number. As shown in fig. 24, the specified number is set to 1000, i.e., the Easter egg can be unlocked when the post count reaches 1000. In fig. 24, the xx1 discussion area currently has 800 posts, so the progress bar is not completely filled and reads 800/1000. A text prompt such as 'join the discussion to unlock the CP-exclusive featurette video' is also shown, encouraging target objects to participate in the discussion and unlock the corresponding additional multimedia resource.
The target object may click the 'Easter egg' button to the left of the 'x' (close) button in the upper-right corner of the discussion area to expand or hide the count progress bar (also called the Easter-egg progress bar).
When it is determined that the total amount of social dynamic content related to the current discussion group has reached the specified number, a resource link to the additional multimedia resource may be displayed in the count progress bar.
That is, once the post count of the discussion area reaches the standard, the interaction-count display on the post-count progress bar becomes an Easter-egg link (i.e., a resource link to the additional multimedia resource), for example in the form of a clickable 'watch now' button.
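A minimal sketch of the progress-bar and unlock logic follows; the threshold of 1000 is the example from fig. 24, and in practice the required number and resource come from the episode's official operator:

```ts
interface EggState {
  label: string;      // e.g. "800/1000", shown on the progress bar
  ratio: number;      // fill ratio of the progress bar
  unlocked: boolean;  // when true, render the "watch now" resource link
  resourceUrl?: string;
}

function eggState(postCount: number, required: number, resourceUrl: string): EggState {
  const unlocked = postCount >= required;
  return {
    label: `${Math.min(postCount, required)}/${required}`,
    ratio: Math.min(postCount / required, 1),
    unlocked,
    resourceUrl: unlocked ? resourceUrl : undefined, // the Easter-egg link
  };
}
```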
Optionally, the client may, in response to a trigger operation on the resource link, jump to the resource display interface of the additional multimedia resource for playback.
Referring to fig. 25, a schematic diagram of a resource link in the embodiment of the present application: the 'watch now' button shown at S251 in fig. 25 is one presentation of the resource link. After the target object clicks 'watch now', the exclusive CP materials, such as exclusive behind-the-scenes or promotional videos, are unlocked; for example, the page jumps to the exclusive behind-the-scenes video interface (a resource display interface) shown in fig. 27 to play the video.
Optionally, some time after the Easter egg is unlocked, or after the target object clicks the 'Easter egg' button again as shown at S252, the count progress bar and the resource link may be hidden and a corresponding view control presented on the interactive interface. Fig. 26 is a schematic diagram of a view control according to an embodiment of the present application; the 'view behind-the-scenes' button to the left of the 'x' button in the upper right of fig. 26 is an example of a view control.
The target object can click this button, whereupon the client, in response to the trigger operation on the view control, jumps to the resource display interface of the additional multimedia resource for playback.
The 'view behind-the-scenes' button in fig. 26 is one presentation of a view control in the embodiment of the present application. After the target object clicks it, the page jumps to the interface shown in fig. 27, a schematic diagram of a resource display interface in the embodiment of the present application, where the behind-the-scenes video is played in the behind-the-scenes video interface (i.e., the resource display interface).
In this embodiment, different viewing entries for the additional multimedia resource are provided, so the target object can quickly view the behind-the-scenes video and the like, improving the target object's experience without interfering with browsing the discussion area.
It should be noted that the video interaction method in the embodiments of the application has mainly been described from the client's perspective; it is further described below from the server's perspective.
Referring to fig. 28, a flowchart of another implementation of a video interaction method according to an embodiment of the present application, taking the server as the execution body, the method is implemented as follows:
S281: After receiving an editing request for the target content, the server returns at least one editing function to the client.
The target content is captured by the client from the target video in the video-playing interface; each editing function is an editing function for pictures or audio related to video objects in the target video.
S282: the server acquires editing effects on the target content and stores release information for the target content with the editing effects.
The release information includes but is not limited to details of the published content, the publishing account, the publishing time, the publishing platform, and so on.
The editing effect is obtained by the client performing editing of the corresponding effect on at least one of the picture information and the audio information related to the at least two video objects to be edited in the target content, according to the specified editing function.
In the embodiment of the application, when the target object triggers the editing operation on the target content, the client can send an editing request to the server; the server responds by retrieving the corresponding list of editing functions, together with information such as the effect-control details of each editing function, and returns them to the client. The client can then display the editing functions to the target object, e.g., presenting the secondary-creation interface shown in fig. 10 with text, zoom, warp, head-pressing, splicing, the material library, and so on.
Further, after the target object selects the specified editing function, the related effect controls can be presented, and the target content is edited according to the target object's selection, rendering the corresponding editing effect.
In an alternative embodiment, the target object may publish the target content with the editing effect to the video discussion area in the form of social dynamic content, as follows:
The server may receive a discussion-group selection request for the target content and return the information of each discussion group associated with the target video to the client.
Each discussion group contains at least two video objects associated with the target video.
In the embodiment of the application, discussion-group information specifically refers to the video objects contained in a discussion group, its topic label, and the like. Before publishing target content, the target object can send a discussion-group selection request for the target content to the server through the client; the server returns the information of each discussion group, and the client can present an interface as shown in fig. 19, with discussion groups xx1, xx2, xx3, xx4, and so on, for the target object to choose from.
Then, the server receives a request to publish the target content to the designated discussion group and returns the discussion-area information associated with that group to the client, so that the client publishes the target content with the editing effect, in the form of social dynamic content, to the interactive interface of the target video, in the discussion area corresponding to the selected designated discussion group.
In the embodiment of the application, the target object selects a designated discussion group, such as xx1, and clicks publish; a request to publish the target content to the designated discussion group is then sent to the server through the client. The server retrieves the discussion-area information related to that area, including but not limited to the details, interaction counts, and posting times of the posts whose topic label is the designated discussion group. After the server returns this to the client, an interactive interface as shown in fig. 19 can be presented, specifically displaying the posts related to discussion group xx1.
In this process, the content presentation mode selected by the target object (corresponding to the various sort orders) may also be sent by the client to the server. For example, when the target object selects the 'latest' order, the server sorts the posts related to xx1 by posting time and sends the sorted posts, or the sorted sequence, to the client for display.
Optionally, when the target object triggers the object interaction operation, the client may also send an object-interaction request for the target video to the server; the server receives the object-interaction request, obtains the detail information of at least one key video segment contained in the target video, and returns the obtained detail information to the client.
For example, when the target object turns on the 'CP' mode by triggering the object interaction operation, the client obtains the video information of the currently playing target video, such as the video name 'episode 1 of series xx', and sends an object-interaction request to the server based on that information. The server obtains from storage the detail information of at least one key video segment of the target video; for example, when the key video segments are iconic scenes, a list of iconic scenes can be returned, whose detail information includes segment position information (e.g., the playing time, from which the position on the progress bar can be determined), an iconic-scene screenshot, the corresponding discussion-group CP name, the display text, and so on.
The client may highlight the at least one key video segment on the playback progress bar of the target video according to the playing-time information in the details, each key video segment corresponding to at least one group of video objects, as shown, for example, in fig. 6.
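For illustration, the highlighting step might look like the following sketch, which positions one marker element per key segment along the progress bar; the field names and CSS class are assumptions:

```ts
interface KeyScene { startMs: number; endMs: number; cpName: string; }

// Adds a color-change mark on the progress bar for each key video segment,
// scaled to the segment's position within the total video duration.
function markScenes(bar: HTMLElement, scenes: KeyScene[], durationMs: number): void {
  for (const s of scenes) {
    const mark = document.createElement('div');
    mark.className = 'scene-mark'; // styled as the color-change identifier
    mark.style.left = `${(s.startMs / durationMs) * 100}%`;
    mark.style.width = `${((s.endMs - s.startMs) / durationMs) * 100}%`;
    mark.title = s.cpName;
    bar.appendChild(mark);
  }
}
```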
Further, the client may select a target key video segment based on the above details and display the second prompt information in bubble form using the iconic-scene screenshot, CP name, display text, and so on corresponding to the target key video segment, as shown at S80 in fig. 8, where the display text reads 'xx CP princess-carry iconic scene'.
In addition, it should be noted that when the target key video segment is associated with several discussion groups, the iconic-scene screenshot, CP name, and related post details of each group may be returned, and the client selects one group as the target discussion group based on the post details and displays that group's iconic-scene screenshot, CP name, and so on. Alternatively, the server can screen out the target discussion group based on each group's post details and feed back only the iconic-scene screenshot, CP name, and the like of the target discussion group associated with the target key video segment.
In other scenarios, the second prompt information can be displayed in bubble form based on the candidate screenshots in the target key video segment corresponding to the expected interaction scenario (such as a classroom scene, a meeting scene, or an evidence-search scene), the names of the video-object groups, the display text, and so on.
In this embodiment, marking the key video segments of the target video on the playback progress bar effectively guides the target object to open the discussion area to browse and interact.
Optionally, when viewing the discussion area of a discussion group, the target object can select a specific content presentation mode, such as latest, hot, or current. After selection, a sorting request is sent to the server through the client, containing the content presentation mode for the current discussion area. The server receives the sorting request, sorts the social dynamic content related to the current discussion area according to the contained presentation mode, and returns the sorting result to the client, so that the client displays the related social dynamic content in the current discussion area of the interactive interface based on that result.
As shown in fig. 20A, when the target object selects the 'latest' mode, the sorting request sent to the server carries the presentation mode 'latest', which may also be represented by a field (such as model), e.g., model=zx.
As shown in fig. 20B, when the target object selects the 'hot' mode, the sorting request carries the presentation mode 'hot', which may likewise be represented by the model field, e.g., model=rm.
Similarly, the 'current' mode may be expressed as model=dq.
It should be noted that the above-mentioned methods are only simple examples, and any related ordering method is applicable to the embodiments of the present application, and will not be described in detail herein.
Optionally, when the number of posts related to a CP discussion area reaches the officially prescribed number, the additional multimedia resource related to the CP may be unlocked. Specifically:
The server can receive, in real time, the interaction data each client sends for the current discussion group and store it in real time, where the current discussion group is the one corresponding to the discussion area currently displayed in the discussion interface.
When the server determines that the amount of social dynamic content related to the current discussion area has reached the specified number, it can unlock the additional multimedia resources related to the current discussion group and feed back the corresponding resource link to the client, so that the client plays the additional multimedia resource based on the resource link.
As shown in fig. 24 and fig. 25, for example, when the server counts that the post number of discussion group xx1 has reached 1000, the Easter egg related to the current discussion group xx1 can be unlocked, and the corresponding resource link is fed back to the client, so that the client plays the additional multimedia resource based on the link.
In the above embodiment, unlocking additional multimedia resources further enriches video interactivity.
Taking a CP-knocked scene as an example, the above-listed various video interaction methods will be briefly summarized.
Fig. 29A and 29B are general flow diagrams of a video interaction method according to an embodiment of the present application. Wherein, the front end may be a client and the back end may be a server.
In fig. 29A, the functions related to the 'CP' button are mainly described, specifically divided into: the bubble above the 'CP' button; clicking the 'CP' button; the color-changed marks of famous scenes on the progress bar; the bubble at the nearest famous-scene mark on the progress bar; clicking the screenshot button; selecting a filter, sticker, or special effect for secondary creation; and selecting a CP and publishing a post.
The bubble above the 'CP' button is implemented by the front end; for example, a Graphics Interchange Format (GIF) picture is added above the 'CP' button on the front-end side, such as the bubble S70 shown in fig. 7.
Clicking the 'CP' button is implemented by the front end, which listens for the 'CP' button click event on the front-end side.
The color-changed marking of famous scenes on the progress bar is implemented through front-end and back-end interaction: the back end calls an interface to return the famous-scene list, including playing time, screenshot, CP name, display text, and the like; the playing progress bar of the front-end player is then colored and marked according to the starting time of each scene.
The bubble at the nearest famous-scene mark is implemented by the front end: the front end queries the next famous scene in the scene list according to the current playing time and renders a bubble, using the progress-bar marker (dotting) component, as shown by S80 in fig. 8.
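A minimal sketch of this lookup, assuming the scene list is sorted by start time and using illustrative field names derived from the list contents described above:

```typescript
// Hypothetical famous-scene entry; fields mirror the returned list described
// above (playing time, screenshot, CP name, display text).
interface FamousScene {
  startTime: number;   // scene start time in seconds
  screenshot: string;  // URL of the scene screenshot
  cpName: string;      // CP name shown in the bubble
  displayText: string; // display text shown in the bubble
}

// Find the next famous scene after the current playing time; the result is
// handed to the progress-bar marker (dotting) component for bubble rendering.
function nextFamousScene(
  scenes: FamousScene[],
  currentTime: number,
): FamousScene | undefined {
  return scenes.find((scene) => scene.startTime >= currentTime);
}
```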
Clicking the screenshot button is implemented by the front end: the front end listens for the screenshot button click event and invokes the player screenshot interface; after the screenshot is completed, the secondary-creation popup window is invoked via a callback.
Selecting a filter, sticker, or special effect for secondary creation is implemented through front-end and back-end interaction: the front end obtains the effect list from the back end and displays it through a popup window, and submits the edited content to a back-end interface according to the selection of the target object; the back end returns the effect list and stores the secondary-creation results of the target object.
Selecting a CP and publishing a post is implemented through front-end and back-end interaction: the front end listens for the post button click event and submits the post content to a back-end interface; the back end returns the CP list and adds the new post through the interface.
In the above process, the present application guides the target object to enable the CP shipping function and displays famous scenes through a front-end user interface (UI) style. The front end listens for the operations of the target object and invokes the screenshot interface of the player, and the target object can select preset stickers, filters, and special effects for secondary creation. The secondary creation is rendered on the front end in a multithreaded manner through a third-party special-effect platform and WebAssembly (WASM, a low-level language for web pages), without blocking video playback. After the secondary creation is finished, a CP role can be selected for posting, and the result is stored in a database.
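The non-blocking rendering can be pictured with the following hedged sketch: the effect computation runs in a Web Worker so the main thread that drives video playback never stalls. The worker script name and message shape are assumptions, not the actual third-party platform API.

```typescript
// Hypothetical off-thread effect rendering: pixel data is transferred to a
// worker (which could call into a WASM effect module) and the edited frame
// comes back without blocking video playback on the main thread.
function renderEffectOffThread(frame: ImageData, effectId: string): Promise<ImageData> {
  return new Promise((resolve) => {
    const worker = new Worker("effect-worker.js"); // assumed worker script
    worker.onmessage = (
      e: MessageEvent<{ pixels: ArrayBuffer; width: number; height: number }>,
    ) => {
      const { pixels, width, height } = e.data;
      resolve(new ImageData(new Uint8ClampedArray(pixels), width, height));
      worker.terminate();
    };
    // Transfer the buffer instead of copying it to keep the hand-off cheap.
    worker.postMessage(
      { effectId, width: frame.width, height: frame.height, pixels: frame.data.buffer },
      [frame.data.buffer],
    );
  });
}
```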
Fig. 29B illustrates the specific procedure of creating a new CP discussion area and unlocking an Easter egg, which is divided into: opening the CP discussion area; creating a new CP discussion area; clicking to view hot/latest/current posts; comment/like interaction; counting the posts in the current discussion area; and unlocking the Easter egg. Most of these parts are implemented through front-end and back-end interaction.
Opening the CP discussion area. Front end: the right-side 'drawer' pops up; the 'drawer' is a floating layer of the interactive interface, as shown in the discussion area part of figs. 20A and 20B. Back end: the interface returns the CP list and the post list.
Creating a new CP discussion area. Front end: listen for the '+Add' button click event, and present the CP role selection card and the CP name input box. Back end: store the newly created CP name and role names.
Clicking to view hot/latest/current posts. Front end: listen for the content display mode button click event, send a list acquisition request, and refresh the posts. Back end: sort by the number of likes, number of comments, and creation time, returning the hot stream, latest stream, or current stream.
Comment/like interaction. Front end: listen for the interaction button click event, display the comment editing box, and modify the like state. Back end: store and return the updated post information.
Counting the posts in the current discussion area. Front end: display the current post count and the unlock-progress bar. Back end: query the number of posts in the current CP discussion area.
Unlocking the Easter egg. Front end: switch the post-count display to a jump button. Back end: return the corresponding CP Easter egg link.
In summary, the CP discussion area can obtain the posts of the corresponding CP from the database and sort and filter them as hot (by likes), latest (by posting time), or current (posted within one minute before the current time). When the posts of a certain pair's CP discussion area reach a preset number, the Easter egg button is displayed.
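A minimal sketch of these three orderings; the post shape and the one-minute window for 'current' follow the description above, and all names are illustrative assumptions.

```typescript
interface ForumPost {
  id: string;
  likeCount: number;
  commentCount: number;
  createdAt: number; // posting time, epoch milliseconds
}

// Hot stream: most likes first, ties broken by comment count.
function hotStream(posts: ForumPost[]): ForumPost[] {
  return [...posts].sort(
    (a, b) => b.likeCount - a.likeCount || b.commentCount - a.commentCount,
  );
}

// Latest stream: newest posting time first.
function latestStream(posts: ForumPost[]): ForumPost[] {
  return [...posts].sort((a, b) => b.createdAt - a.createdAt);
}

// Current stream: only posts from the minute before `now`, newest first.
function currentStream(posts: ForumPost[], now: number): ForumPost[] {
  return posts
    .filter((p) => now - p.createdAt >= 0 && now - p.createdAt <= 60_000)
    .sort((a, b) => b.createdAt - a.createdAt);
}
```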
Fig. 30A and 30B show an interaction sequence diagram of a video interaction method according to an embodiment of the present application. The sequence diagram mainly involves three parties: the target object side, the client side, and the server side.
Fig. 30A mainly illustrates an implementation process of (1) the target object enabling the CP mode and (2) the target object taking a screenshot for secondary creation:
Specifically, the target object enables the CP mode; the procedure of steps 1)-6) is described as follows:
1) When the client determines that the CP function is not enabled, a bubble is displayed on the 'CP' button.
Specifically, the client listens for 'CP' button click events: if the target object has not enabled the CP mode, a GIF (animated) picture is added above the button to display a bubble that guides the target object to click the 'CP' button; the part S70 shown in fig. 7 is a schematic diagram of the bubble corresponding to the first prompt information in the embodiment of the present application.
2) The target object clicks the 'CP' button to enable the CP mode.
3) The client acquires the video information and sends a request to the server.
4) The server acquires the detailed information of all famous scenes of the video according to the video information.
5) The server returns the details of the famous scenes of the video.
Specifically, 2) to 5) represent: when the target object enables the CP mode, the client sends a request to the server according to the video information; for example, the client acquires the video information of the currently playing target video (for example, video name: episode 1 of the xx TV series) and sends an object interaction request to the server based on it. The server obtains the related famous-scene list from storage, including playing time, screenshot, CP name, display text, and the like, and returns it.
6) The playing progress bar of the client shows color-changed marks, and a bubble appears at the famous-scene mark nearest to the current playing time.
As shown in fig. 8, S80 is a schematic representation of the second prompt information presented as a bubble in the present application, and the target key video segment is the famous-scene segment nearest to the current playing time.
The client locates the next famous scene in the returned scene list according to the current playing time, and renders a bubble on the progress bar using the marker (dotting) component.
The target object performs screenshot secondary creation; the procedure of steps 7)-13) is described as follows:
7) The target object clicks the screenshot button.
8) The client captures the current picture and displays the secondary-creation interface.
The client listens for the screenshot button click event; after the target object clicks the button, the current picture is captured by calling the screenshot interface of the player, and the secondary-creation interface is invoked via callback and popped up.
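Steps 7)-8) can be sketched as follows; `captureFrame` and the popup hook are assumed names standing in for the real player screenshot interface and the secondary-creation callback:

```typescript
// Hypothetical player with a screenshot interface.
interface Player {
  captureFrame(): Promise<Blob>; // assumed name for the screenshot interface
}

// Listen for the screenshot button click, capture the current picture, then
// pop up the secondary-creation interface via callback.
function bindScreenshotButton(
  button: HTMLButtonElement,
  player: Player,
  openSecondaryCreation: (shot: Blob) => void,
): void {
  button.addEventListener("click", async () => {
    const shot = await player.captureFrame();
    openSecondaryCreation(shot); // callback pops up the editing interface
  });
}
```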
Fig. 10 is a schematic diagram of a secondary-creation interface according to an embodiment of the present application, in which a plurality of editing functions are shown. Specifically, before displaying the secondary-creation interface, the client may also send an editing request to the server to acquire the editing functions returned by the server, so as to present the secondary-creation interface shown in fig. 10 (this process is not shown in fig. 30A).
9) The target object selects a specified editing function and performs the corresponding operation.
10) The client renders the corresponding effect according to the operation of the target object.
Specifically, the target object may select a specified editing function and then, based on an effect control related to the specified editing function, edit the screenshot correspondingly; the corresponding effects are rendered as shown in figs. 11A-18B.
11) The target object confirms the final result.
Specifically, after the client renders the corresponding effect according to the specified editing function selected by the target object, it can return to the secondary-creation interface, as shown in the upper interface of fig. 19, and the target object can trigger the client to initiate a request to the server through the publish button in the secondary-creation interface; alternatively, a confirmation interface can be popped up, and after the target object confirms, the client initiates a request to the server to acquire and display all CP discussion area information of the target video, as in steps 12) and 13) below.
12) The client initiates a request to the server.
13) The server returns all CP discussion area information of the target video upon request.
Specifically, the client initiates a request to the server, where the request can carry the video information of the current target video, and the server acquires all CP discussion area information of the target video according to the video information.
Fig. 30B mainly illustrates an implementation process of (3) the target object creating a new discussion area and (4) the target object operating in the discussion area:
Specifically, the target object creates a discussion area; the procedure of steps 14)-20) is described as follows:
14) The client displays all discussion area information.
15) The target object clicks the 'new discussion area' button.
16) The client calls up the new-discussion-area menu.
The client listens for the click event of the 'new discussion area' button (e.g. '+Add' in fig. 20B); when the button is touched, the related 'new discussion area' submenu is invoked, as shown in fig. 21, in which selectable roles are listed.
17) The target object names the discussion area and selects the two roles of the CP.
Specifically, the target object names the discussion area, selects the CP roles, and so on according to the prompt; for example, the target object selects two roles to form a new CP and names it 'sweet', thereby adding the discussion area 'sweet'.
18) The client transmits the newly added discussion area information to the server.
19) The server stores the newly added discussion area information.
20) The server returns all the discussion areas.
If the new discussion area is successfully created, the client transmits the discussion group details, name, and other information corresponding to the newly added discussion area to the server; the server stores the information in a dedicated discussion area table and returns all the discussion areas for the client to display, as shown in fig. 22.
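Steps 18)-20), including the duplicate check mentioned later for creation failures, might look like the following hedged sketch; the table interface and field names are illustrative assumptions.

```typescript
// Hypothetical record for a newly added discussion area.
interface NewDiscussionArea {
  cpName: string;            // e.g. "sweet"
  roleIds: [string, string]; // the two selected CP roles
  videoId: string;
}

// Assumed storage interface for the discussion area table.
interface DiscussionAreaTable {
  exists(roleIds: [string, string], videoId: string): Promise<boolean>;
  insert(area: NewDiscussionArea): Promise<void>;
  listAll(videoId: string): Promise<NewDiscussionArea[]>;
}

// Store the new discussion area and return all areas for the client to
// display; creation fails if the same pair of roles already has a group.
async function createDiscussionArea(
  table: DiscussionAreaTable,
  area: NewDiscussionArea,
): Promise<NewDiscussionArea[] | "duplicate"> {
  if (await table.exists(area.roleIds, area.videoId)) return "duplicate";
  await table.insert(area);
  return table.listAll(area.videoId);
}
```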
The target object operates in the discussion area; the procedure of steps 21)-32) is described as follows:
21) The target object selects a target CP discussion area.
22) The client requests the specified discussion area information from the server.
23) The server returns the specified discussion area information.
24) The client arranges the discussion area information.
25) The target object selects a discussion area sorting mode.
26) The client initiates a sorting request to the server.
27) The server returns the sorted result.
28) The client refreshes the display.
Specifically, 21)-28) represent: the client monitors the discussion area selected by the target object and sends the server a request for the post details of the specified discussion area; the server sorts the posts according to a specific rule and returns them, and the client displays them in order.
First, the client requests the specified discussion area information from the server; after the server returns it, the client arranges the discussion area information. At this point, the display modes of the discussion area are divided into hot, latest, and current, and the client side may arrange the content according to the hot display mode (or any one of the latest, current, and other display modes).
Further, if the target object selects a discussion area sorting mode, for example switching from hot to latest, a sorting request can be initiated to the server; after the server returns the sorted result, the client refreshes the display.
29) The target object performs operations such as posting content, commenting, and liking in the discussion area.
30) The client transmits the relevant operation data to the server.
31) The server stores the operation data of the target object.
32) When the posts in the discussion area reach the specified number, the Easter egg is unlocked.
Specifically, 29)-32) indicate that the client monitors the operations of the target object in the discussion area, such as commenting, liking, and changing the sorting (latest, hot, current), and transmits the related operations to the server for storage. When the posts in a certain CP discussion area reach the specified number, the Easter egg is unlocked. Based on the post count returned by the server, the client visualizes it as a progress bar of the current post count against the unlock target count; when the unlock condition is reached, the post-count icon is switched to a jump button that returns the Easter egg link, as shown in figs. 24 and 25.
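The client-side visualization can be sketched as below; the DOM ids are illustrative assumptions.

```typescript
// Hypothetical rendering of the unlock progress: draw the post count against
// the unlock target, and switch to a jump button once the target is reached.
function renderUnlockProgress(postCount: number, target: number, eggLink: string): void {
  const bar = document.getElementById("unlock-bar") as HTMLProgressElement;
  bar.max = target;
  bar.value = Math.min(postCount, target);

  if (postCount >= target) {
    const btn = document.getElementById("egg-jump") as HTMLButtonElement;
    btn.hidden = false; // swap the post-count icon for the jump button
    btn.onclick = () => window.open(eggLink, "_blank");
  }
}
```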
It should be noted that the processes and interaction sequences illustrated in figs. 29A to 30B are only simple examples; any related embodiment is applicable to the embodiments of the present application and will not be described in detail herein.
In general, the present application provides target objects who want to ship CPs with an experience different from that of other long-video platforms: a target object can ship CPs together with other target objects while watching the video rather than only after finishing it, so gratification need not be delayed. In addition, the added functions support the target object in completing the whole CP shipping process within the video platform, improving the social experience of watching video. The core usage scenario of this CP shipping solution is thus distinct from that of other social activities carried out on video platforms.
In addition, interviews and surveys of target objects show that they do not mind interrupting video watching to ship CPs: long-video viewers with CP shipping needs will actively pause a drama and go to other platforms for CP-related activities. It can therefore be inferred that, once the CP-related secondary-creation function is integrated into the video platform, its usage rate may be higher than that of other common social functions. Moreover, the technical solution has advantages in terms of copyright: the target object publishes edited content on the copyrighted long-video platform, which greatly reduces the infringement risks that secondary creation may involve, and the large number of copyrights owned by the video platform supports freer creation and expression by target objects.
Based on the same inventive concept, the embodiment of the application also provides a video interaction device. As shown in fig. 31, which is a schematic structural diagram of the video interaction device 3100, may include:
The intercepting unit 3101 is configured to respond to an intercepting operation of a target video in the video playing interface, and display intercepted target content in the video playing interface;
An editing unit 3102 for presenting an editing effect on the target content in response to an editing operation on the target content; the editing effect is obtained by editing at least one of picture information and audio information related to at least two video objects to be edited in the target content according to a specified editing function, and the editing process is a process aiming at the interaction state of the at least two video objects to be edited, so that the interaction state of the at least two video objects to be edited after editing accords with an expected interaction scene;
The sharing unit 3103 is configured to share the target content with the editing effect in response to a sharing operation of the target content.
Optionally, the sharing unit 3103 is specifically configured to:
responding to discussion group selection and sharing operation of target content, publishing the target content with editing effect to an interactive interface of a target video in the form of social dynamic content, and publishing to a discussion area corresponding to the selected appointed discussion group; each discussion group contains at least two video objects associated with the target video.
Optionally, the sharing unit 3103 is specifically configured to:
In the discussion area, clustering and displaying target content and other content related to the target content, and sequentially presenting viewpoint information corresponding to each content in a viewpoint area; the other content is other social dynamic content with similarity to the target content exceeding a preset threshold, and the display sequence of the viewpoint information corresponding to each content is associated with the display sequence of the target content and the other content.
Optionally, the sharing unit 3103 is specifically configured to:
Responding to the sharing operation of the target content, and sharing the target content with the editing effect to a third-party social platform;
The apparatus further comprises:
The jump unit 3104 is configured to, in response to a trigger operation of viewing, by the third-party social platform, the target video corresponding to the target content, jump to the video playing interface, and restore, when the target content continues to be played in the video playing interface, to the original video frame by the corresponding editing effect.
Optionally, the editing unit 3102 specifically is configured to:
presenting at least one editing function in response to an editing operation on the target content;
Presenting at least one effect control corresponding to a specified editing function in response to a selection operation of the specified editing function;
And after executing editing processing on the corresponding part in the target content based on the specified effect control, each time the editing operation triggered based on one specified effect control in the at least one effect control is responded, the corresponding editing effect is presented.
Optionally, the target content is a screenshot, and the specified editing function is an editing function for a specified part of a video object; the editing unit 3102 is specifically configured to:
Identifying at least two video objects to be edited in the target content in response to an editing operation triggered by a finishing effect control in the at least one effect control, moving at least one of the at least two video objects to be edited according to the relevant parts based on the finishing effect control, and presenting the editing effect of the finishing edition; or
identifying at least two video objects to be edited in the target content in response to an editing operation triggered by a smiling effect control in the at least one effect control, connecting the relevant parts of the video objects to be edited based on the smiling effect control, and presenting the editing effect of the smiling edition.
Optionally, the apparatus further comprises:
the first response unit 3105 is configured to highlight at least one key video segment in the playing progress bar of the target video in response to an object interaction operation for the target video.
Optionally, the target content is a screenshot, and the specified editing function is an editing function for a specified part of a video object;
if the first response unit 3105 responds to the object interaction operation before the editing unit 3102 responds to the editing operation on the target content, the editing unit 3102 is specifically configured to:
in response to the editing operation on the target content, identify at least two video objects to be edited in the target content, move at least one of the at least two video objects to be edited according to the relevant parts, and present the editing effect of the finishing edition; or
in response to the editing operation on the target content, identify at least two video objects to be edited in the target content, connect the relevant parts of the at least two video objects to be edited, and present the editing effect of the smiling edition.
Optionally, the editing unit 3102 specifically is configured to:
Moving at least one video object to be edited, adjusting the distance between the target areas in the relevant parts of the video objects to be edited into a target distance range, and presenting the editing effect of the finishing edition.
Optionally, the editing unit 3102 specifically is configured to:
twisting the target area in the relevant part of at least one video object to be edited so as to connect it with the relevant parts of the other video objects to be edited, and presenting the editing effect of the smiling edition.
Optionally, the apparatus further comprises:
A first prompting unit 3106, configured to present an object interaction control in the video playing interface;
And if it is determined that the target object has watched the target video for a certain period of time without triggering the object interaction operation based on the object interaction control, presenting corresponding first prompt information in the video playing interface, where the first prompt information is used to guide the target object to perform the object interaction operation so as to view the key video segments related to each discussion group in the target video.
Optionally, the apparatus further comprises:
the second prompting unit 3107 is configured to present second prompt information at the relevant position of the target key video segment in the playing progress bar, where the second prompt information is used to guide the target object to view the discussion area corresponding to the target key video segment, and the target key video segment is selected from the at least one key video segment based on the current playing time point of the target video and the respective playing times of the at least one key video segment.
Optionally, the second prompt information further includes a jump control; the second prompting unit 3107 is also configured to:
Responding to the triggering operation of the jump control, presenting the interactive interface of the target video, and displaying the discussion area corresponding to the target key video segment in the interactive interface; the discussion area corresponding to the target key video segment is used to display the social dynamic content corresponding to the target discussion group associated with the target key video segment.
Optionally, the apparatus further comprises:
A second response unit 3108, configured to highlight, in the video playing interface, a discussion control for jumping to the interactive interface;
Responding to the triggering operation of the discussion control, presenting the interactive interface of the target video, and displaying the discussion area corresponding to the target key video segment in the interactive interface; the discussion area corresponding to the target key video segment is used to display the social dynamic content corresponding to the target discussion group associated with the target key video segment.
Optionally, each key video segment is associated with at least one discussion group; the apparatus further comprises:
a determining unit 3109 configured to determine the target discussion group by:
If the target key video segment is associated with only one discussion group, taking that discussion group as the target discussion group;
if the target key video segment is associated with a plurality of discussion groups, selecting one from the plurality of discussion groups as the target discussion group based on the interaction quantity of the social dynamic content corresponding to each discussion group.
Optionally, the apparatus further comprises:
a creating unit 3110 for presenting a plurality of video objects related to the target video in response to a forum creating operation triggered through the interactive interface;
and responding to the selection of at least two target video objects in the plurality of video objects and the discussion group naming operation, and displaying a new discussion area corresponding to a discussion group formed by the at least two target video objects in the interactive interface.
Optionally, the creating unit 3110 is further configured to:
Before a new discussion zone corresponding to a discussion group formed by at least two target video objects is displayed in the interactive interface, if the fact that the discussion group corresponding to the selected at least two target video objects exists is determined, a corresponding creation failure prompt is displayed in the video playing interface.
Optionally, the interactive interface includes labels corresponding to each discussion group; the apparatus further comprises:
a switching unit 3111, configured to switch to a discussion area of a discussion group corresponding to the selected target tag in response to a tag selection operation triggered at the interactive interface;
Wherein each discussion zone includes at least one of the following content presentation modes:
displaying, according to the posting time of the social dynamic content, all social dynamic content related to the discussion area for the target video;
displaying, according to the interaction quantity of the social dynamic content, all social dynamic content related to the discussion area for the target video;
displaying, according to the posting time or interaction quantity of the social dynamic content, the social dynamic content related to the discussion area that was published within a set time period before and after the current playing time point of the target video.
Optionally, the apparatus further comprises:
A third response unit 3112, configured to unlock the additional multimedia resources related to the current discussion group after determining that the total number of social dynamic content related to the current discussion group reaches a specified number; the current discussion group is a discussion group corresponding to a discussion area currently displayed by the discussion interface;
And playing the additional multimedia resources in response to the viewing operation of the additional multimedia resources.
Optionally, the third response unit 3112 is further configured to:
displaying a quantity progress bar corresponding to the total quantity of the social dynamic contents related to the current discussion group in an interactive interface, wherein the total length of the quantity progress bar is determined based on the designated quantity;
the third response unit 3112 is specifically configured to:
when the total quantity of the social dynamic contents related to the current discussion group reaches the specified quantity, displaying resource links related to the additional multimedia resources in a quantity progress bar;
And responding to the triggering operation of the resource link, and jumping to a resource display interface related to the additional multimedia resource for playing.
Optionally, the third response unit 3112 is further configured to:
After the resource links related to the additional multimedia resources are displayed in the quantity progress bar, hiding the quantity progress bar and the resource links, and presenting corresponding view controls on the interactive interface;
And responding to the triggering operation of the view control, and jumping to a resource display interface related to the additional multimedia resource for playing.
Based on the same inventive concept, the embodiment of the application also provides another video interaction device. As shown in fig. 32, which is a schematic structural diagram of a video interaction device 3200, may include:
a feedback unit 3201, configured to return at least one editing function to the client after receiving an editing request for the target content; the target content is obtained by intercepting target video in a video playing interface through a client;
a storage unit 3202 for acquiring an editing effect on the target content and storing release information for the target content with the editing effect; the editing effect is obtained by editing at least one of picture information and audio information related to at least two video objects to be edited in the target content according to a specified editing function, wherein the editing process is a process aiming at the interaction state of the at least two video objects to be edited, so that the interaction state of the at least two video objects to be edited after editing accords with an expected interaction scene.
Optionally, the feedback unit 3201 is further configured to:
Receiving a discussion group selection request aiming at target content, and returning information of each discussion group associated with target video to a client, wherein each discussion group comprises at least two video objects related to the target video;
receiving a request for posting the target content to the appointed discussion group, returning discussion area information associated with the appointed discussion group to the client, so that the client can post the target content with editing effect to an interactive interface of the target video in the form of social dynamic content and to a discussion area corresponding to the selected appointed discussion group.
Optionally, the feedback unit 3201 is further configured to:
Receiving an object interaction request aiming at a target video, wherein the object interaction request is sent by a client in response to an object interaction operation aiming at the target video;
and acquiring the detail information of at least one key video segment contained in the target video, and returning the acquired detail information to the client so that the client highlights the at least one key video segment in the playing progress bar of the target video according to the detail information, wherein each key video segment corresponds to at least one group of video objects.
Optionally, the feedback unit 3201 is further configured to:
Receiving a sorting request sent by a client, wherein the sorting request comprises a content display mode corresponding to a current discussion area; the current discussion area is the discussion area currently displayed by the discussion interface;
according to the content display mode contained in the ordering request, ordering the social dynamic content related to the current discussion area;
And returning the sequencing result to the client so that the client displays the relevant social dynamic content based on the sequencing result in the current discussion area in the interactive interface.
Optionally, the feedback unit 3201 is further configured to:
Receiving and storing the interactive data aiming at the current discussion group and sent by a client; the current discussion group is a discussion group corresponding to a discussion area currently displayed by the discussion interface;
After the number of the social dynamic contents related to the current discussion area reaches the designated number, unlocking the additional multimedia resources related to the current discussion group, and feeding back corresponding resource links to the client side so that the client side plays the additional multimedia resources based on the resource links.
In the present application, the target content captured from the video playing interface of the client is not directly published to the video discussion area; instead, it can undergo secondary creation using a specified editing function, for example, the picture information and/or audio information contained in the target content can be processed with a specific effect. This secondary creation can be performed directly while the target object watches the video, without a third-party creation platform, so the operation is simple. When watching a video, if the target object wants to initiate interaction around the video objects in it, the target content can be captured according to the target object's own needs and edited based on a specified editing function. The editing is targeted: it mainly adjusts the interaction state of the at least two video objects to be edited in the target content so that it conforms to an expected interaction scene, which facilitates subsequent interaction between objects around the video objects to be edited, the expected interaction scene, and the like. On this basis, after the secondary creation of the target content, the target content with a certain editing effect can be shared, which enriches the interactive content between objects, improves content quality, and optimizes the combination of video and object interaction.
In addition, the present application further proposes that, after the secondary creation, the target content with a certain editing effect can be published to the video discussion area in the form of social dynamic content. In this process, for convenience of discussion, the video object discussion group to be bound with the target content is selected before publishing, and the target content is then published directly to the video discussion area of the specified discussion group. Since the social dynamic content in that discussion area is closely related to the specified discussion group, discussion and interaction between target objects are facilitated, and video interactivity is enriched while the interactive combination of video and objects is optimized.
For convenience of description, the above parts are described as being functionally divided into modules (or units) respectively. Of course, the functions of each module (or unit) may be implemented in the same piece or pieces of software or hardware when implementing the present application.
Having described the video interaction method and apparatus of an exemplary embodiment of the present application, next, an electronic device according to another exemplary embodiment of the present application is described.
Those skilled in the art will appreciate that the various aspects of the application may be implemented as a system, method, or program product. Accordingly, aspects of the application may be embodied in the following forms, namely: an entirely hardware embodiment, an entirely software embodiment (including firmware, micro-code, etc.), or an embodiment combining hardware and software aspects, which may be referred to herein generally as a 'circuit', 'module', or 'system'.
The embodiment of the application also provides electronic equipment based on the same conception as the embodiment of the method. In one embodiment, the electronic device may be a server, such as server 220 shown in FIG. 2. In this embodiment, the electronic device may be configured as shown in fig. 33, including a memory 3301, a communication module 3303, and one or more processors 3302.
The memory 3301 is used for storing a computer program executed by the processor 3302. The memory 3301 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, a program required for running an instant communication function, and the like; the storage data area can store various instant messaging information, operation instruction sets and the like.
The memory 3301 may be a volatile memory, such as a random access memory (RAM); the memory 3301 may also be a non-volatile memory, such as a read-only memory (ROM), a flash memory, a hard disk drive (HDD), or a solid-state drive (SSD); alternatively, the memory 3301 may be any other medium that can be used to carry or store a desired computer program in the form of instructions or data structures and that can be accessed by a computer, but is not limited thereto. The memory 3301 may also be a combination of the above.
The processor 3302 may include one or more central processing units (central processing unit, CPUs) or digital processing units, or the like. The processor 3302 is configured to implement the video interaction method when calling the computer program stored in the memory 3301.
The communication module 3303 is used for communicating with terminal devices and other servers.
The specific connection medium between the memory 3301, the communication module 3303, and the processor 3302 is not limited in the embodiment of the present application. In the embodiment of the present application, the memory 3301 and the processor 3302 are connected through the bus 3304 in fig. 33; the bus 3304 is depicted with a thick line in fig. 33, and the connection manner between other components is only schematically illustrated and not limited thereto. The bus 3304 may be classified into an address bus, a data bus, a control bus, and the like. For ease of description, only one thick line is depicted in fig. 33, but this does not mean that there is only one bus or one type of bus.
The memory 3301 stores therein a computer storage medium in which computer executable instructions for implementing the video interaction method of the embodiment of the present application are stored. The processor 3302 is configured to perform the video interaction method described above, as shown in fig. 2.
In another embodiment, the electronic device may also be other electronic devices, such as the terminal device 210 shown in fig. 2. In this embodiment, the structure of the electronic device may include, as shown in fig. 34: communication component 3410, memory 3420, display unit 3430, camera 3440, sensor 3450, audio circuit 3460, bluetooth module 3470, processor 3480, etc.
The communication component 3410 is configured to communicate with a server. In some embodiments, a wireless fidelity (WiFi) module may be included; the WiFi module belongs to short-range wireless transmission technology, and the electronic device may help the user send and receive information through the WiFi module.
Memory 3420 may be used to store software programs and data. The processor 3480 executes various functions of the terminal device 210 and data processing by executing software programs or data stored in the memory 3420. The memory 3420 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid-state storage device. The memory 3420 stores an operating system that enables the terminal device 210 to operate. The memory 3420 of the present application may store an operating system and various application programs, and may also store a computer program for executing the video interaction method of the embodiment of the present application.
The display unit 3430 may be used to display information input by the user or information provided to the user, as well as a graphical user interface (GUI) of the various menus of the terminal device 210. Specifically, the display unit 3430 may include a display screen 3432 provided on the front surface of the terminal device 210. The display screen 3432 may be configured in the form of a liquid crystal display, light-emitting diodes, or the like. The display unit 3430 may be configured to display any interface in the embodiments of the present application, such as the video playing interface, the interactive interface, the resource display interface, and the like.
The display unit 3430 may also be used to receive input numeric or character information, generate signal inputs related to user settings and function control of the terminal device 210, and in particular, the display unit 3430 may include a touch screen 3431 provided on the front surface of the terminal device 210, and may collect touch operations on or near the user, such as clicking buttons, dragging scroll boxes, and the like.
The touch screen 3431 may cover the display screen 3432, or the touch screen 3431 may be integrated with the display screen 3432 to implement the input and output functions of the terminal device 210, and after integration, the touch screen may be simply referred to as a touch screen. The display unit 3430 of the present application may display application programs and corresponding operation steps.
The camera 3440 may be used to capture still images, and the user may post images captured by the camera 3440 through an application. There may be one camera 3440 or a plurality of cameras. The object generates an optical image through the lens, which is projected onto the photosensitive element. The photosensitive element may be a charge-coupled device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor. The photosensitive element converts the optical signal into an electrical signal, which is then transferred to the processor 3480 for conversion into a digital image signal.
The terminal device may further comprise at least one sensor 3450, such as an acceleration sensor 3451, a distance sensor 3452, a fingerprint sensor 3453, a temperature sensor 3454. The terminal device may also be configured with other sensors such as gyroscopes, barometers, hygrometers, thermometers, infrared sensors, light sensors, motion sensors, and the like.
The audio circuit 3460, speaker 3461, and microphone 3462 may provide an audio interface between the user and the terminal device 210. The audio circuit 3460 may transmit the electrical signal converted from received audio data to the speaker 3461, where it is converted into a sound signal and output. The terminal device 210 may also be configured with a volume button for adjusting the volume of the sound signal. Conversely, the microphone 3462 converts the collected sound signal into an electrical signal, which is received by the audio circuit 3460 and converted into audio data; the audio data is then output to the communication component 3410 to be transmitted to, for example, another terminal device 210, or output to the memory 3420 for further processing.
The bluetooth module 3470 is used for exchanging information with other bluetooth devices having bluetooth modules through bluetooth protocol. For example, the terminal device may establish a bluetooth connection with a wearable electronic device (e.g., a smart watch) that also has a bluetooth module through the bluetooth module 3470, thereby performing data interaction.
The processor 3480 is a control center of the terminal device, connects various parts of the entire terminal using various interfaces and lines, and performs various functions of the terminal device and processes data by running or executing software programs stored in the memory 3420, and calling data stored in the memory 3420. In some embodiments, the processor 3480 may include one or more processing units; the processor 3480 may also integrate an application processor that primarily processes operating systems, user interfaces, applications, etc., and a baseband processor that primarily processes wireless communications. It will be appreciated that the baseband processor described above may not be integrated into the processor 3480. The processor 3480 of the present application may run an operating system, applications, user interface displays and touch responses, as well as the video interaction method of the embodiments of the present application. In addition, the processor 3480 is coupled to the display unit 3430.
In some possible embodiments, aspects of the video interaction method provided by the present application may also be implemented as a program product comprising a computer program for causing an electronic device to perform the steps of the video interaction method according to the various exemplary embodiments of the application described herein above when the program product is run on the electronic device, e.g. the electronic device may perform the steps as shown in fig. 3 or fig. 28.
The program product may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. The readable storage medium can be, for example, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The program product of embodiments of the present application may take the form of a portable compact disc read only memory (CD-ROM) and comprise a computer program and may be run on an electronic device. However, the program product of the present application is not limited thereto, and in this document, a readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with a command execution system, apparatus, or device.
The readable signal medium may comprise a data signal propagated in baseband or as part of a carrier wave in which a readable computer program is embodied. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A readable signal medium may also be any readable medium that is not a readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with a command execution system, apparatus, or device.
A computer program embodied on a readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Computer programs for performing the operations of the present application may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, C++ or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The computer program may execute entirely on the consumer electronic device, partly on the consumer electronic device, as a stand-alone software package, partly on the consumer electronic device and partly on a remote electronic device or entirely on the remote electronic device or server. In the case of remote electronic devices, the remote electronic device may be connected to the consumer electronic device through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external electronic device (e.g., connected through the internet using an internet service provider).
It should be noted that although several units or sub-units of the apparatus are mentioned in the above detailed description, such a division is merely exemplary and not mandatory. Indeed, the features and functions of two or more of the elements described above may be embodied in one element in accordance with embodiments of the present application. Conversely, the features and functions of one unit described above may be further divided into a plurality of units to be embodied.
Furthermore, although the operations of the methods of the present application are depicted in the drawings in a particular order, this is not required or suggested that these operations must be performed in this particular order or that all of the illustrated operations must be performed in order to achieve desirable results. Additionally or alternatively, certain steps may be omitted, multiple steps combined into one step to perform, and/or one step decomposed into multiple steps to perform.
It will be appreciated by those skilled in the art that embodiments of the present application may be provided as a method, a system, or a computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, etc.) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general-purpose computer, special-purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions executed by the processor of the computer or other programmable data processing apparatus produce means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While preferred embodiments of the present application have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. It is therefore intended that the following claims be interpreted as including the preferred embodiments and all such alterations and modifications as fall within the scope of the application.
It will be apparent to those skilled in the art that various modifications and variations can be made to the present application without departing from the spirit or scope of the application. Thus, it is intended that the present application also include such modifications and alterations insofar as they come within the scope of the appended claims or the equivalents thereof.
Claims (25)
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202310096276.3A CN118368464A (en) | 2023-01-17 | 2023-01-17 | Video interaction method, device, electronic device and storage medium |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| CN118368464A true CN118368464A (en) | 2024-07-19 |
Family
ID=91878965
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202310096276.3A Pending CN118368464A (en) | 2023-01-17 | 2023-01-17 | Video interaction method, device, electronic device and storage medium |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN118368464A (en) |
Cited By (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN119225606A (en) * | 2024-10-31 | 2024-12-31 | 北京达佳互联信息技术有限公司 | Content generation method, device, electronic device and storage medium |
| CN119225606B (en) * | 2024-10-31 | 2025-09-09 | 北京达佳互联信息技术有限公司 | Content generation method, device, electronic equipment and storage medium |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| US20220150572A1 (en) | Live video streaming services | |
| US11323753B2 (en) | Live video classification and preview selection | |
| US10623783B2 (en) | Targeted content during media downtimes | |
| US10142681B2 (en) | Sharing television and video programming through social networking | |
| TWI581128B (en) | Method, system, and computer-readable storage memory for controlling a media program based on a media reaction | |
| US11343595B2 (en) | User interface elements for content selection in media narrative presentation | |
| US20140188997A1 (en) | Creating and Sharing Inline Media Commentary Within a Network | |
| KR20180020203A (en) | Streaming media presentation system | |
| Smith | Motion comics: the emergence of a hybrid medium | |
| CN103530301A (en) | System and method for establishing virtual community | |
| WO2014097814A1 (en) | Display device, input device, information presentation device, program and recording medium | |
| CN116962784A (en) | A video playback method, device and electronic equipment | |
| CN108737903B (en) | Multimedia processing system and multimedia processing method | |
| CN115499672B (en) | Image display method, device, equipment and storage medium | |
| CN118368464A (en) | Video interaction method, device, electronic device and storage medium | |
| WO2023130715A1 (en) | Data processing method and apparatus, electronic device, computer-readable storage medium, and computer program product | |
| US20240137599A1 (en) | Terminal and non-transitory computer-readable medium | |
| CN119484453A (en) | Chat message reading method, device, electronic device and storage medium | |
| JP2024130087A (en) | Systems, programs, etc. | |
| US20260025533A1 (en) | Server, terminal, and method | |
| EP3316204A1 (en) | Targeted content during media downtimes | |
| CN107005871A (en) | Systems and methods for presenting content | |
| CN120166261A (en) | A video application method and system | |
| CN121099114A (en) | Live streaming recommendation methods, devices, equipment, and media | |
| CN119135999A (en) | Content processing method, device, electronic device and storage medium |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| PB01 | Publication | ||
| PB01 | Publication | ||
| SE01 | Entry into force of request for substantive examination | ||
| SE01 | Entry into force of request for substantive examination |