Detailed Description
In order to make the objects, technical solutions and advantageous effects of the present invention more apparent, the present invention will be further described in detail with reference to the accompanying drawings and examples. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
For ease of understanding, the terms involved in the embodiments of the present invention are explained below.
A client refers to a program that corresponds to a server and provides local services to the user. Apart from some applications that run only locally, clients are typically installed on an ordinary user machine and need to operate in conjunction with a server. Since the development of the internet, commonly used clients include web browsers for the world wide web, email clients for sending and receiving email, and instant messaging client software. Applications of this type require a corresponding server and service program in the network to provide services such as database services and email services, so a specific communication connection must be established between the client and the server to ensure normal operation of the application. In the embodiments of the present application, the client may be a client of any of various video applications, such as a live broadcast application, a short video application, a video media application, or a video conference application.
Live video broadcast refers to a technology in which data of a broadcasting party is collected by a device, compressed through a series of processing steps such as video coding into a video stream that can be watched and transmitted, and output to a viewer's client.
A live broadcast room is a virtual space (or virtual room) whose interface can be displayed by both the anchor client and the viewer client. Viewers can watch the anchor's live content through the live broadcast room interface displayed by the viewer client and interact with the anchor by voice or text, while the anchor presents the live content through the live broadcast room interface of the anchor client.
The following describes the design concept of the embodiment of the present application.
With the development of internet technology and intelligent devices, video applications are increasingly favored by users. Video combines elements such as images, text, and sound, providing users with a better viewing and interaction experience, and has gradually become a mainstream form of expression on the internet.
However, in some video playing modes, the display interface uses only part of its area to show the video picture, leaving the remaining area blank. Moreover, a user is sometimes interested in only one person or element in the video picture, while other people in the picture also take up screen space. As a result, the space of the display interface is wasted, the video picture is too small, and watching the video becomes inconvenient.
In view of this, in an embodiment of the present application, a video playing method includes:
displaying a video picture in a first area of the display interface; in response to an intercepting operation on the video picture, intercepting at least one video content from the video picture; and then displaying the at least one video content, after enlargement, in a second area of the display interface.
In the embodiment of the application, a video picture is displayed in a first area of the display interface; in response to an intercepting operation on the video picture, at least one video content is intercepted from the video picture, thereby obtaining the video content the user is interested in; the enlarged at least one video content is then displayed in a second area of the display interface. In this way, the blank area of the display interface is put to good use and display space is not wasted, the user can watch an enlarged view of the content of interest, the inconvenience of watching video content caused by a too-small video picture is resolved, and the user's video watching experience is improved.
In practical application, the video playing method in the embodiment of the application can be applied to any video playing scene such as a live broadcast scene, a short video playing scene, a video on demand scene, a video conference scene and the like.
Referring to fig. 1, a system architecture diagram applicable to an embodiment of the present application includes at least a terminal device 101 and a server 102.
The terminal device 101 is pre-installed with a video application, wherein the video application includes a client application, a web page application, an applet application, and the like. The video application may specifically be a live broadcast application, a short video application, a video media application, a video conference application, etc., and the terminal device 101 may be a smart phone, a tablet computer, a notebook computer, a desktop computer, a smart speaker, a smart watch, a smart car device, etc., but is not limited thereto.
Server 102 is a background server for video applications. The server 102 may be an independent physical server, a server cluster or distributed system formed by a plurality of physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, a content delivery network (Content Delivery Network, CDN), big data, and artificial intelligence platforms. The terminal device 101 and the server 102 may be directly or indirectly connected through wired or wireless communication, and the present application is not limited herein.
The video playing method in the embodiment of the application can be executed by the terminal equipment 101 or can be executed interactively by the terminal equipment 101 and the server 102.
The video playing method in the embodiment of the present application executed by the terminal device 101 includes the following steps:
The terminal device 101 presents a video picture in a first area of the display interface. In response to an intercepting operation on the video picture, the terminal device 101 intercepts at least one video content from the video picture and then presents the enlarged at least one video content in a second area of the display interface. The video playing method in the embodiment of the application makes good use of the blank area of the display interface, avoids wasting display space, lets the user watch an enlarged view of the content of interest, resolves the inconvenience of watching video content caused by a too-small video picture, and thereby improves the user's video watching experience.
Based on the system architecture diagram shown in fig. 1, an embodiment of the present application provides a flow of a video playing method, as shown in fig. 2, where the flow of the method is performed by a computer device, and the computer device may be the terminal device 101 shown in fig. 1, and includes the following steps:
Step S201, a video picture is displayed in a first area in a display interface.
Specifically, the display interface may be a horizontal screen display interface or a vertical screen display interface, where the horizontal screen display interface is a display interface displayed when the terminal device is in a horizontal screen state, and the vertical screen display interface is a display interface displayed when the terminal device is in a vertical screen state.
In response to a triggering operation, the video picture is displayed in a first area of the display interface. The triggering operation may be an operation of switching from the horizontal screen display interface to the vertical screen display interface, an operation of switching from the vertical screen display interface to the horizontal screen display interface, an operation of clicking or double-clicking a video identifier, and the like.
The video pictures include at least a horizontal screen video picture and a vertical screen video picture. The horizontal screen video picture is the video picture displayed full screen when the terminal device is in the horizontal screen state, and the vertical screen video picture is the video picture displayed full screen when the terminal device is in the vertical screen state. The display interface and the video picture also differ across video applications. For example, in a live broadcast application, the display interface is the live broadcast room interface and the video picture is the live broadcast picture; in a short video application, the display interface is the short video playing interface and the video picture is the short video picture.
The first area in the display interface is a partial area in the display interface, and the position, the shape and the size of the first area can be set according to actual conditions. The first area may be used to display content such as a bullet screen, special effects, and the like, in addition to displaying video pictures.
When the video picture is displayed in the first area of the display interface, any display form in which the video picture occupies a partial area of the display interface may be selected.
For example, referring to fig. 3a, a schematic diagram of a horizontal screen display interface according to an embodiment of the present application is shown, where a horizontal screen live broadcast picture and a barrage sent by a viewer are displayed.
In response to an operation of switching from a horizontal screen display interface to a vertical screen display interface, a vertical screen display interface is displayed, and referring to fig. 3b, a schematic diagram of a vertical screen display interface provided for an embodiment of the present application is shown, where the vertical screen display interface includes a first area 301, and the first area 301 includes a live broadcast picture after switching and a bullet screen sent by a viewer.
Referring to fig. 4a, a schematic diagram of a vertical screen display interface according to an embodiment of the present application is shown, where a vertical screen live broadcast picture, a bullet screen sent by a viewer, and a virtual gift special effect are displayed on the vertical screen display interface.
In response to an operation of switching from the vertical screen display interface to the horizontal screen display interface, the horizontal screen display interface is displayed, and referring to fig. 4b, a schematic diagram of the horizontal screen display interface provided for an embodiment of the present application is shown, where the horizontal screen display interface includes a first area 401, and the first area 401 includes a vertical screen live broadcast picture after switching, a bullet screen sent by a viewer, and a virtual gift special effect.
For example, in response to an operation of clicking a video play button, a horizontal screen display interface is shown. Referring to fig. 4c, a schematic diagram of a horizontal screen display interface is provided for an embodiment of the present application, where the horizontal screen display interface includes a first area 402, and the first area 402 includes a live broadcast picture and a bullet screen sent by a viewer.
Step S202, in response to the capture operation for the video picture, capturing at least one video content from the video picture.
Specifically, the capture operation for the video picture may be one or more of a slide operation, a click operation, a double click operation, a long press operation, and the like. The video content may be content within a specified range in the video picture, or may be an object in the video picture, and the object may specifically be a person or an object. In a specific application scenario, the video content may be live content, short video content, video conference content, etc.
Step S203, displaying the at least one amplified video content in a second area in the display interface.
Specifically, the second area in the display interface is a partial area in the display interface, and the position, shape and size of the second area can be set according to actual situations. The second area may be a blank area, and may also be used to display content such as comment information, bullet screen information, and special effects, which are not in the video frame.
After at least one video content is intercepted from the video picture, it may be enlarged according to a preset rule and displayed at a preset position in the second area. Alternatively, upon detecting an enlarging or moving operation by the user, the at least one video content is enlarged based on the user's enlarging operation and then moved to the corresponding position in the second area according to the detected moving operation.
For example, referring to fig. 5, a schematic diagram of a vertical screen display interface according to an embodiment of the present application is provided, where the vertical screen display interface includes a first area 501 and a second area 502, the first area 501 includes a live broadcast picture and a bullet screen sent by a viewer, and the second area 502 is a blank area, and the vertical screen display interface further includes a "cut" button.
The viewer clicks the "cut" button to intercept the live broadcast picture. In response to the intercepting operation on the live broadcast picture, the terminal device intercepts live content 503 and live content 504 from the live broadcast picture, and then presents the enlarged live content 503 and live content 504 in the second area 502, as shown in fig. 6.
If the viewer clicks the "default picture" button shown in fig. 6, the vertical screen live room interface will resume the live room interface as shown in fig. 5. The viewer may also click on the "cut" button shown in fig. 6 to continue cutting the live view.
In the embodiment of the application, a video picture is displayed in a first area of the display interface; in response to an intercepting operation on the video picture, at least one video content is intercepted from the video picture, thereby obtaining the video content the user is interested in; the enlarged at least one video content is then displayed in a second area of the display interface. In this way, the blank area of the display interface is put to good use and display space is not wasted, the user can watch an enlarged view of the content of interest, the inconvenience of watching video content caused by a too-small video picture is resolved, and the user's video watching experience is improved.
Optionally, in step S202 described above, at least one interception range in the video picture is obtained in response to the intercepting operation on the video picture, and then at least one video content is intercepted from the video picture based on the at least one interception range.
Specifically, each interception range corresponds to one or more video contents. An interception range may be a preset shape such as a rectangle, circle, or square, or a custom shape. The intercepting operation may be one or more of a sliding operation, a clicking operation, a double-clicking operation, a long-press operation, and the like.
In one possible implementation, the intercepting operation is a sliding operation. After the terminal device starts the cutting process, it detects the user's gesture in real time. When the user's finger is detected to start sliding on the screen, the coordinates of the finger track are recorded in real time. All pixel points within the finger track coordinate area (namely, an interception range) are then determined from the finger track coordinates, and the pixel point data in that area are recorded, where the pixel point data include pixel point coordinates and pixel point color values, and the finger track forms a closed loop.
The terminal device establishes a new layer and maps the pixel point data in the finger track coordinate area onto the new layer in real time, obtaining the video content intercepted from the video picture.
Optionally, the new layer established by the terminal device has the same size as the video picture. The transparency of the region inside the finger track coordinate area is set to 1, and the transparency of the region outside it is set to 0, ensuring that the new layer displays only the video content within the finger track coordinate area, thereby cutting the video picture.
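The freehand-capture steps above can be sketched as follows. This is a minimal illustrative sketch, not the claimed implementation: the recorded finger track is treated as a closed polygon, and a hypothetical `build_capture_layer` helper keeps only the pixels inside it (transparency 1 inside, 0 outside). All function names and data layouts are assumptions.

```python
# Illustrative sketch: build a capture layer from a closed finger track.
# The frame is a 2D list of pixel color values; the track is a list of
# (x, y) vertices forming a closed loop, as recorded from the gesture.

def point_in_polygon(x, y, polygon):
    """Even-odd ray-casting test: is (x, y) inside the closed polygon?"""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y):
            # x-coordinate where this edge crosses the horizontal ray at y
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def build_capture_layer(frame, track):
    """Map frame pixels inside the finger track onto a new layer.

    Returns a layer the same size as the frame holding (color, alpha)
    pairs: alpha 1 inside the track, alpha 0 (fully transparent) outside.
    """
    height, width = len(frame), len(frame[0])
    layer = [[(None, 0)] * width for _ in range(height)]
    for y in range(height):
        for x in range(width):
            if point_in_polygon(x, y, track):
                layer[y][x] = (frame[y][x], 1)  # keep pixel, fully opaque
    return layer
```

An even-odd ray-casting test is one common way to decide whether a pixel lies inside an arbitrary closed track; a production renderer would more likely rasterize the path as a clip mask on the GPU rather than test pixels one by one.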
For example, referring to fig. 5, a schematic diagram of a vertical screen display interface according to an embodiment of the present application is provided, where the vertical screen display interface includes a first area 501 and a second area 502, the first area 501 includes a live broadcast picture and a bullet screen sent by a viewer, and the second area 502 is a blank area, and the vertical screen display interface further includes a "cut" button.
On the basis of the vertical screen display interface shown in fig. 5, the process of capturing live broadcast content from a live broadcast picture by the terminal device includes the following steps, as shown in fig. 7:
the viewer clicks the "cut" button and the terminal device initiates the cutting procedure. The terminal equipment detects the gesture of the user in real time, and records the track coordinates of the finger in real time when detecting that the finger of the audience starts to slide on the screen. According to the finger track coordinates, all pixels in the intercepting range 505 and the intercepting range 506 are determined, and pixel data in the intercepting range 505 and the intercepting range 506 are recorded, wherein the pixel data specifically comprises pixel coordinates and pixel color values.
At this time, the terminal device displays the interception range 505 and the interception range 506 in the first area 501 of the display interface, and simultaneously displays the "ok" button, as shown in fig. 8.
The viewer clicks the "ok" button, and the terminal device establishes a new layer, and then sets the transparency of the region within the interception range 505 and the interception range 506 to 1, and sets the transparency of the region outside the interception range 505 and the interception range 506 to 0, thereby obtaining the live content 503 and the live content 504. The enlarged live content 503 and live content 504 are then presented in a second area 502, as shown in particular in fig. 6.
In another possible implementation, the intercepting operation is a double-click operation. After the terminal device starts the cutting process, it detects the user's gesture in real time. When a double click of the user's finger on the screen is detected, the position coordinates of the double click are recorded. Then, with the double-click position coordinates as the center, the pixel point coordinates and pixel point color values of all pixel points within a preset range (namely, the interception range) are determined.
The terminal device establishes a new layer and maps the pixel point coordinates and color values of all pixel points within the preset range onto the new layer in real time, obtaining the video content intercepted from the video picture.
Optionally, the new layer established by the terminal device has the same size as the video picture. The transparency of the area within the preset range is set to 1, and the transparency of the area outside the preset range is set to 0, ensuring that the new layer displays only the video content within the preset range, thereby cutting the video picture.
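As a sketch of the double-click branch above, a hypothetical helper can compute the preset rectangular interception range centred on the double-click position, clamped so it never extends past the edges of the video picture. The function name, parameter names, and the 200×150 default range size are assumptions for illustration only.

```python
def capture_range_from_double_tap(tap_x, tap_y, frame_w, frame_h,
                                  range_w=200, range_h=150):
    """Rectangular interception range centred on the double-tap position.

    The rectangle is shifted (clamped) as needed so it stays entirely
    within the frame. Returns (left, top, right, bottom) in pixels.
    """
    left = max(0, min(tap_x - range_w // 2, frame_w - range_w))
    top = max(0, min(tap_y - range_h // 2, frame_h - range_h))
    return (left, top, left + range_w, top + range_h)
```

Clamping rather than shrinking keeps the interception range a constant size, which matches the "preset range" wording above; a variant could instead shrink the rectangle near the edges.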
For example, referring to fig. 5, a schematic diagram of a vertical screen display interface according to an embodiment of the present application is provided, where the vertical screen display interface includes a first area 501 and a second area 502, the first area 501 includes a live broadcast picture and a bullet screen sent by a viewer, and the second area 502 is a blank area, and the vertical screen display interface further includes a "cut" button.
The viewer clicks the "cut" button and the terminal device starts the cutting process. The terminal device detects the user's gesture in real time and, when a double click of the viewer's finger on the screen is detected, records the position coordinates of the double click. Then, with the double-click position coordinates as the center, the pixel point coordinates and pixel point color values of all pixel points within the preset rectangular range (namely, the interception range) are determined.
At this time, the terminal device displays the interception range 507 in the first area 501 of the display interface and simultaneously displays an "ok" button, as shown in fig. 9.
The viewer clicks the "ok" button; the terminal device creates a new layer, sets the transparency of the area within the interception range 507 to 1 and the transparency of the area outside it to 0, thus obtaining the live content 508. The enlarged live content 508 is then presented in the second area 502, as shown in fig. 10.
It should be noted that the intercepting operation is not limited to the above two operations, but may be other operations, which are not described herein.
In the embodiment of the application, at least one interception range in the video picture is obtained in response to the intercepting operation on the video picture, and at least one video content is then intercepted from the video picture based on the at least one interception range. This offers different users a variety of screenshot choices: different video content is intercepted according to each user's interests and displayed enlarged in the second area of the display interface, so that users can watch the content of interest more clearly, improving their video watching experience.
Optionally, for each of the above-mentioned interception ranges, the content located within the interception range may be intercepted as one video content, or at least one object within the interception range whose degree of completeness meets a preset condition may be intercepted as at least one video content.
For example, referring to fig. 11, a schematic diagram of a vertical screen display interface according to an embodiment of the present application is provided, where the vertical screen display interface includes a first area 501 and a second area 502, the first area 501 includes a live broadcast picture and a bullet screen sent by a viewer, and the second area 502 is a blank area, and the vertical screen display interface further includes a "cut" button. The terminal device obtains a capturing range 1101 in the live view in response to the capturing operation for the live view.
One possible implementation is to take the content in the interception area 1101 as live content 1102 and then present the enlarged live content 1102 in a second area, as shown in fig. 12 in particular.
In one possible implementation, the interception range 1101 includes an object a, an object B, and an object C, and then integrity recognition is performed on the object a, the object B, and the object C in the interception range 1101. Since the integrity of the object a is highest, the object a is taken as the live content 1103, and then the enlarged live content 1103 is displayed in the second area, as shown in fig. 13.
In one possible implementation, the interception range 1101 includes an object A, an object B, and an object C, and integrity recognition is performed on the three objects in the interception range 1101. Since the integrity of object A and the integrity of object B are both greater than the preset threshold while the integrity of object C is less than the preset threshold, object A is taken as live content 1103 and object B as live content 1104; the enlarged live content 1103 and live content 1104 are then displayed in the second area, as shown in fig. 14.
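The completeness-based selection described in the two implementations above can be sketched as follows. This assumes an upstream object detector that reports a completeness (integrity) score per object; the function name, the `top_only` switch, and the 0.8 threshold are hypothetical.

```python
def select_capture_contents(objects, threshold=0.8, top_only=False):
    """Pick objects in an interception range by completeness score.

    objects: list of (name, completeness) pairs, completeness in [0, 1].
    With top_only=True, only the single most complete object is kept
    (the object-A-only case); otherwise every object whose score meets
    the preset threshold is kept (the A-and-B case).
    """
    if top_only:
        best = max(objects, key=lambda o: o[1])
        return [best[0]]
    return [name for name, score in objects if score >= threshold]
```

The two branches correspond to the two figure examples: keeping only the most complete object versus keeping every object above the threshold.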
It should be noted that, the embodiments for acquiring the video content from the capturing range of the video frame are not limited to the above embodiments, but may be other manners, which are not described herein.
In the embodiment of the application, the terminal device obtains the interception range in the video picture in response to the intercepting operation, and then performs integrity recognition on the video content within the interception range to obtain video content with high integrity. When that video content is displayed enlarged in the second area of the display interface, the user can see the complete content of interest more clearly, which improves the user's video watching experience.
Optionally, after at least one video content is intercepted from the video picture in response to the intercepting operation, each interception range from which content has been intercepted may be handled in one of several ways: it may be filled with preset content, where the preset content may be an image of a preset color, an image intercepted from the video picture, or the like; the original video content may be retained in it; or some interception ranges may retain the original video content while the others are filled with the preset content.
For example, referring to fig. 8, a schematic diagram of a vertical screen display interface according to an embodiment of the present application is provided, where the vertical screen display interface includes a first area 501 and a second area 502, the first area 501 includes a live broadcast picture and a barrage sent by a viewer, and the second area 502 is a blank area. The terminal device obtains a capturing range 505 and a capturing range 506 in the live view in response to the capturing operation for the live view.
One possible implementation would be to have the content in the interception range 505 as live content 503 and the content in the interception range 506 as live content 504. The enlarged live content 503 and live content 504 are then presented in a second area of the display interface. The preset patterns are filled in the intercepting range 505 and the intercepting range 506 in the first area, and the display interface after filling the preset patterns is shown in fig. 15.
One possible implementation would be to have the content in the interception range 505 as live content 503 and the content in the interception range 506 as live content 504. The enlarged live content 503 and live content 504 are then presented in a second area of the display interface. The original content is retained in the interception range 505 and the interception range 506 in the first area, and a display interface for retaining the original content is shown in fig. 16.
One possible implementation would be to have the content in the interception range 505 as live content 503 and the content in the interception range 506 as live content 504. The enlarged live content 503 and live content 504 are then presented in a second area of the display interface. The original content is retained in the interception range 505 in the first area, and the preset pattern is filled in the interception range 506 in the first area, as shown in fig. 17.
It should be noted that, the processing manner in at least one interception range after intercepting the corresponding video content is not limited to the above several manners, but may be other manners, which are not repeated herein.
In the embodiment of the application, after the terminal device intercepts at least one video content from the interception range in response to the intercepting operation on the video picture, each interception range is either filled with the preset content or left with its original content, which preserves the visual integrity of the display interface.
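A minimal sketch of the fill-or-retain choice described above, assuming the frame is held as a 2D array of color values and the interception range is rectangular; the function name, mode names, and default preset color are illustrative assumptions only.

```python
def fill_vacated_range(frame, rect, mode, preset_color=(128, 128, 128)):
    """Handle an interception range after its content has been captured.

    frame: 2D list of RGB tuples; rect: (left, top, right, bottom).
    mode "preset": overwrite the range with a preset color.
    mode "original": retain the original content (no change).
    """
    left, top, right, bottom = rect
    if mode == "preset":
        for y in range(top, bottom):
            for x in range(left, right):
                frame[y][x] = preset_color
    # mode "original": nothing to do, the original pixels stay in place
    return frame
```

The mixed case in the text (some ranges retained, others filled) would simply call this helper once per interception range with the desired mode.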
Optionally, in the step S203, when the second area in the display interface displays the enlarged at least one video content, the embodiments of the present application at least provide the following implementations:
in the first embodiment, at least one video content is displayed in the second region of the display interface in accordance with the boundary shape of the second region.
Specifically, the entire second region is filled with the at least one video content, so that the boundary of the filled video content coincides with the boundary of the second region. During filling, the boundary of the video content may be cut, or the video content may be expanded according to a preset rule.
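One common way to realize the boundary-matching display above is an "aspect fill": scale the content until it covers the whole second region and crop the excess symmetrically. The sketch below computes only the geometry and is an assumption about one possible implementation, not the claimed method.

```python
def aspect_fill(content_w, content_h, region_w, region_h):
    """Scale factor and symmetric crop so the content fills the region.

    The content is scaled until both dimensions cover the region, then
    the overflow is cropped equally from both sides, so the filled
    content's boundary coincides exactly with the region boundary.
    Returns (scale, crop_x, crop_y) with crops in scaled pixels.
    """
    scale = max(region_w / content_w, region_h / content_h)
    scaled_w, scaled_h = content_w * scale, content_h * scale
    crop_x = (scaled_w - region_w) / 2
    crop_y = (scaled_h - region_h) / 2
    return scale, crop_x, crop_y
```

Using `min` instead of `max` would give the letterboxed "contain" behavior, which corresponds to the alternative of expanding (padding) the content by a preset rule rather than cutting its boundary.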
For example, referring to fig. 18, a schematic diagram of a vertical screen display interface according to an embodiment of the present application is provided, where the vertical screen display interface includes a first area 501 and a second area 502, the first area 501 includes a live broadcast picture and a bullet screen sent by a viewer, and the second area 502 includes live broadcast content that is intercepted and enlarged from the first area 501, where the intercepted live broadcast content fills the entire second area 502. In addition, the vertical screen display interface also comprises a cutting button and a default picture button.
In the embodiment of the application, displaying the enlarged at least one video content in the second area of the display interface according to the boundary shape of the second area makes full use of the blank area in the display interface and avoids wasting display space.
In the second embodiment, at least one video content amplified according to a preset ratio is displayed in a second area in the display interface.
Specifically, the preset ratio may be set according to actual situations. When displaying the at least one video content enlarged according to the preset ratio, the contents may be arranged in the second area without overlapping, arranged in the second area with partial overlap, or displayed after all or part of them are cut or padded.
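The non-overlapping arrangement described above could, for instance, stack the enlarged contents top to bottom in the second area. The following sketch is illustrative only; the function name, the 2× default ratio, the gap, and the clip-to-fit policy are all assumptions.

```python
def layout_enlarged_contents(sizes, region_w, region_h, ratio=2.0, gap=8):
    """Place contents enlarged by a preset ratio into the second area.

    sizes: list of (w, h) original content sizes. Items are enlarged by
    `ratio`, centred horizontally, and stacked top to bottom with a
    fixed gap so they never overlap; an item that would spill past the
    region bottom is clipped to fit. Returns (x, y, w, h) per item.
    """
    placements, y = [], 0
    for w, h in sizes:
        ew, eh = w * ratio, h * ratio
        if y + eh > region_h:
            eh = max(0, region_h - y)  # clip the overflowing item
        x = (region_w - ew) / 2        # centre horizontally
        placements.append((x, y, ew, eh))
        y += eh + gap
    return placements
```

The overlapping and cut-or-pad variants mentioned in the text would only change this placement policy, not the capture or enlargement steps.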
For example, referring to fig. 19, a schematic diagram of a vertical screen display interface according to an embodiment of the present application is provided, where the vertical screen display interface includes a first area 501 and a second area 502, the first area 501 includes a live broadcast picture and a bullet screen sent by a viewer, and the second area 502 includes live broadcast content that is truncated from the first area 501 and is doubled. In addition, the vertical screen display interface also comprises a cutting button and a default picture button.
In the embodiment of the application, the at least one video content enlarged by the preset ratio is displayed in the second area of the display interface, so that the blank area in the display interface is well utilized and waste of display space is avoided; meanwhile, the user can watch the enlarged video content of interest, thereby improving the video watching experience of the user.
Optionally, after the enlarged at least one video content is displayed in the second area of the display interface, the embodiments of the present application provide at least the following embodiments for adjusting the display attributes of the video content and the video picture:
in a first embodiment, after the enlarged at least one video content is displayed in the second area of the display interface, a first display attribute of the at least one video content in the display interface is adjusted in response to a display attribute adjustment operation for the at least one video content, wherein the first display attribute includes a display position and/or a display size of the at least one video content.
Specifically, the display attribute adjustment operation may be a preset user gesture. The display attribute adjustment operation is used to trigger adjustment of the display position and/or display size of the video content within the second area, or within the whole display interface. For example, when the user gesture is detected to be a move operation, the video content moves with the user gesture; when the user gesture is detected to be a zoom-in operation, the video content is enlarged with the user gesture; and when the user gesture is detected to be a zoom-out operation, the video content is shrunk with the user gesture.
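The gesture-driven attribute adjustment described above can be sketched as follows (an illustrative Python sketch; the attribute dictionary and the gesture encoding are assumed representations for illustration, not part of this application):

```python
def apply_gesture(attrs, gesture):
    """Adjust a content's display position/size according to a detected
    user gesture. `attrs` holds x, y, width (w), height (h) in pixels."""
    a = dict(attrs)                          # do not mutate the caller's state
    kind = gesture["kind"]
    if kind == "move":                       # content follows the finger
        a["x"] += gesture["dx"]
        a["y"] += gesture["dy"]
    elif kind in ("zoom_in", "zoom_out"):    # pinch gestures rescale the content
        scale = gesture["scale"]             # > 1 enlarges, < 1 shrinks
        a["w"] = int(a["w"] * scale)
        a["h"] = int(a["h"] * scale)
    return a
```

A move gesture changes only the display position; a zoom gesture changes only the display size, matching the first display attribute (position and/or size) described above.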
In addition to adjusting the first display attribute of the at least one video content in the display interface, in embodiments of the present application, a second display attribute of the video picture in the display interface may be adjusted in response to a display attribute adjustment operation for the video picture, where the second display attribute includes a display position and/or a display size of the video picture.
For example, on the basis of the display interface shown in fig. 16, when the terminal device detects that the user gesture for the live content 503 is a zoom-in operation, the live content 503 is enlarged with the user gesture, and when the terminal device detects that the user gesture for the live content 504 is a zoom-out operation, the live content 504 is shrunk with the user gesture. When the terminal device detects that the user gesture for the live broadcast picture is a zoom-out operation, the live broadcast picture is shrunk with the user gesture. The display interface after the display attributes are adjusted is shown in fig. 20.
For example, on the basis of the display interface shown in fig. 16, when the terminal device detects that the user gesture for the live content 503 is a move operation, the live content 503 is moved to the position of the live content 504 with the user gesture, and when the terminal device detects that the user gesture for the live content 504 is a move operation, the live content 504 is moved to the position of the live content 503 with the user gesture. The display interface after the display attributes are adjusted is shown in fig. 21.
In the embodiment of the application, the display attributes of the video content and the video picture in the display interface are adjusted in response to display attribute adjustment operations for the video content and the video picture, so that the user can customize the layout of the display interface according to personal preference and display the video content of interest at a better position in the display interface, thereby improving the video watching experience of the user.
In a second embodiment, after the enlarged at least one video content is displayed in the second area of the display interface, in response to a layout adjustment operation for the display interface, a first display attribute of the at least one video content in the display interface and a second display attribute of the video picture in the display interface are synchronously adjusted, where the first display attribute includes a display position and/or a display size of the at least one video content, and the second display attribute includes a display position and/or a display size of the video picture.
Specifically, the layout adjustment operation may be a preset user gesture, and the layout adjustment operation is used to trigger synchronous adjustment of the display positions and/or display sizes of the video content and the video picture in the whole display interface.
In the embodiment of the application, the terminal device adjusts the layout of the whole display interface in response to the layout adjustment operation, so that the user can globally adjust the display interface according to personal preference, enlarging the video content of interest and displaying it at a better position in the display interface while shrinking uninteresting content and displaying it at a less prominent position, thereby improving the video watching experience of the user.
Optionally, when the layout of the display interface is adjusted, the layout may be adjusted manually, or the display interface may be adjusted automatically through a layout template.
The automatic adjustment of the display interface through a layout template is described in detail below:
Template identifiers corresponding to at least one recommended layout template are displayed in the display interface, where the at least one recommended layout template is determined according to the number of the at least one video content. In response to a selection operation for any displayed template identifier, a first display attribute of the at least one video content in the display interface and a second display attribute of the video picture in the display interface are synchronously adjusted according to the selected target layout template.
Specifically, the template identifier of a recommended layout template may be the name, a thumbnail, or the like of the recommended layout template. A layout template library is preset, where the layout template library includes a plurality of layout templates and each layout template corresponds to at least one layout template style. Different kinds of layout templates correspond to different numbers of pictures.
For example, referring to fig. 22, a schematic diagram of a layout template library according to an embodiment of the present application is provided. In the layout template library, when the number of pictures is 2, the corresponding layout templates divide the display interface into two parts, and there are two layout template styles. When the number of pictures is 3, the corresponding layout templates divide the display interface into three parts, and there are three layout template styles. And so on, up to a number of pictures n, where n is a positive integer.
The terminal device intercepts at least one video content from the video picture in response to an interception operation for the video picture. The number of pictures is then determined based on the at least one video content and the video picture. At least one recommended layout template is acquired from the layout template library based on the number of pictures. Finally, the template identifiers corresponding to the at least one recommended layout template are displayed in the display interface.
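The template-recommendation flow above can be sketched as follows (an illustrative Python sketch; the template library contents and style names are hypothetical, chosen only to mirror the fig. 22 example):

```python
# Hypothetical layout template library: picture count -> layout template
# style names. The names are illustrative only, not defined in this
# application; real libraries would store full layout descriptions.
TEMPLATE_LIBRARY = {
    2: ["2-split-horizontal", "2-split-vertical"],
    3: ["3-left-large", "3-top-large", "3-equal-columns"],
}

def recommend_templates(num_contents):
    """Picture count = intercepted contents + the original video picture;
    recommend every template style whose picture count matches."""
    num_pictures = num_contents + 1
    return TEMPLATE_LIBRARY.get(num_pictures, [])
```

With 2 intercepted live contents plus the live broadcast picture, the picture count is 3, so the three styles for 3 pictures are recommended, matching the fig. 16 example.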
For example, in the display interface shown in fig. 16, if the total number of pictures of the live content and the live broadcast picture is 3, the three layout templates in the layout template library whose number of pictures is 3 are used as the recommended layout templates. The template thumbnails corresponding to the 3 recommended layout templates are then displayed in the display interface, as shown in fig. 23.
In the embodiment of the application, the display interface is adjusted automatically through the layout template, so that the video content of interest to the user is displayed at a better position in the display interface, and the adjustment efficiency of the display interface is improved.
Optionally, in response to a selection operation for any displayed template identifier, a target layout template is selected from the at least one recommended layout template. The display sizes of the at least one video content and the video picture are then adjusted according to the target layout template, and the adjusted at least one video content and video picture are respectively filled into the corresponding display areas in the display interface.
Specifically, the display areas corresponding to the at least one video content and the video picture in the target layout template may be preset, or may be set in the target layout template when the user selects the target layout template.
When the adjusted video content is filled into the corresponding display area in the display interface, if the video content exceeds the corresponding display area, the video content is cut. If a blank area remains in the display area after the video content is filled into the corresponding display area, preset content is filled into the blank area of the display area.
Similarly, when the adjusted video picture is filled into the corresponding display area in the display interface, if the video picture exceeds the corresponding display area, the video picture is cut. If a blank area remains in the display area after the video picture is filled into the corresponding display area, preset content is filled into the blank area of the display area.
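The cut-or-fill rule described in the two paragraphs above can be sketched as follows (an illustrative Python sketch; measuring the blank area as a raw pixel count rather than a rectangle is a simplification for illustration):

```python
def fit_into_area(content_w, content_h, area_w, area_h):
    """Fill one picture into its display area: cut whatever exceeds the
    area, and report the blank region (if any) to be filled with preset
    content. Returns ((shown_w, shown_h), blank_pixels)."""
    shown_w = min(content_w, area_w)   # cut horizontally if oversized
    shown_h = min(content_h, area_h)   # cut vertically if oversized
    blank = area_w * area_h - shown_w * shown_h  # pixels left to fill
    return (shown_w, shown_h), blank
```

An oversized picture is cut to the area, while an undersized one leaves a positive blank pixel count that would be filled with the preset content.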
For example, on the basis of the display interface shown in fig. 23, in response to a selection operation on the first of the 3 displayed template identifiers, the terminal device takes the recommended layout template corresponding to the first template identifier as the target layout template. The display sizes of the live content 503 and the live content 504 are then adjusted according to the target layout template, and the adjusted live content 503, live content 504 and live broadcast picture are respectively filled into the corresponding display areas in the display interface. The display interface obtained after filling is shown in fig. 24.
Optionally, after the display sizes of the at least one video content and the video picture are adjusted according to the target layout template and the adjusted at least one video content and video picture are respectively filled into the corresponding display areas in the display interface, the display areas of the at least one video content and the video picture may be adjusted in response to a user operation, or the display attributes of the video content or video picture displayed in one display area may be adjusted.
In a particular implementation, the video content or video picture in one display area may be dragged to another display area for display, the video content or video picture in one display area may be enlarged or shrunk, or the video content or video picture may be moved within one display area.
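The drag-to-another-area behavior can be sketched as follows (an illustrative Python sketch; the content-to-area mapping and the swap-when-occupied rule are assumptions for illustration, not part of this application):

```python
def drag_to_area(assignments, content_id, target_area):
    """Move `content_id` into `target_area`; if another content already
    occupies that area, the two contents exchange display areas."""
    new = dict(assignments)                  # keep the original mapping intact
    source_area = new[content_id]
    occupant = next((c for c, a in new.items() if a == target_area), None)
    new[content_id] = target_area
    if occupant is not None:
        new[occupant] = source_area          # displaced content takes the old area
    return new
```

Dragging one content onto an occupied area thus swaps the two contents' display areas in a single operation.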
For example, on the basis of the display interface shown in fig. 24, in response to a user operation, the live content 503 is dragged to the display area originally corresponding to the live content 504, and the live content 504 is dragged to the display area originally corresponding to the live content 503. The adjusted display interface is shown in fig. 25.
In the embodiment of the application, the display interface is adjusted automatically through the layout template, and after the automatic adjustment, the display position and/or display size of each video content in the target layout template can still be adjusted, so that the user can adjust the display attributes of each video content in the target layout template as needed, which makes it convenient for the user to watch the video content.
In order to better explain the embodiment of the present application, the flow of a video playing method provided by the embodiment of the present application is described below in combination with a live broadcast scene, where the flow of the method is executed by a terminal device.
Referring to fig. 26, a schematic diagram of a live room interface provided by an embodiment of the present application is shown, where the live room interface includes a live region 2601 and a blank region 2602, the live region 2601 includes a live screen, and the live room interface further includes a "cut" button.
The viewer clicks the "cut" button, and the terminal device initiates the interception procedure. The viewer's finger slides on the screen, and the terminal device detects the user gesture in real time; when the viewer's finger starts to slide on the screen, the coordinates of the finger trajectory are recorded in real time. According to the finger trajectory coordinates, all pixels within the interception range 2603 and the interception range 2604 are determined, and the pixel data within the interception range 2603 and the interception range 2604 are recorded, where the pixel data specifically include pixel coordinates and pixel color values. At this time, the terminal device displays the interception range 2603 and the interception range 2604 in the live region 2601 of the live room interface, and simultaneously displays an "ok" button, as shown in fig. 27.
The viewer clicks the "ok" button, and the terminal device creates a new layer, sets the transparency of the regions within the interception range 2603 and the interception range 2604 to 1, sets the transparency of the regions outside the interception range 2603 and the interception range 2604 to 0, and thereby obtains live content 2605 and live content 2606. Referring to fig. 28, the enlarged live content 2605 and live content 2606 are shown in the blank region 2602, and preset content is filled into the interception range 2603 and the interception range 2604. The three layout templates in the layout template library whose number of pictures is 3 are used as recommended layout templates, and the template thumbnails corresponding to the 3 recommended layout templates are displayed in the live room interface.
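The transparency-mask step above (transparency 1 inside the interception ranges, 0 outside) can be sketched as follows (an illustrative Python sketch; representing a frame as a sparse dictionary of pixel coordinates and color values is an assumption for illustration, mirroring the recorded pixel data described above):

```python
def build_capture_layer(frame, capture_range):
    """Create a new layer over the frame: pixels inside the interception
    range get transparency 1 (visible) and all others get transparency 0
    (hidden), which isolates the intercepted live content."""
    layer = {}
    for (x, y), color in frame.items():
        alpha = 1 if (x, y) in capture_range else 0
        layer[(x, y)] = (color, alpha)       # (pixel color value, transparency)
    return layer
```

Applying this layer over the live screen leaves only the pixels within the interception range visible, which is then enlarged and shown in the blank region.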
Referring to fig. 29, the viewer clicks the first of the template thumbnails corresponding to the 3 recommended layout templates, and the first recommended layout template is selected from the 3 recommended layout templates as the target layout template. The display sizes of the live content 2605 and the live content 2606 are then adjusted according to the target layout template, and the adjusted live content 2605, live content 2606 and live screen are respectively filled into the corresponding display areas in the live room interface.
In the embodiment of the application, a screenshot operation is performed on the live screen by detecting the user gesture to obtain the live content of interest to the user; the intercepted live content is then laid out according to the user operation, and the live content of interest to the user is enlarged and displayed at a better position on the screen, so that the space of the live room interface is fully utilized and the user can conveniently watch the live content of interest, thereby improving the live watching experience of the user.
Based on the same technical concept, an embodiment of the present application provides a schematic structural diagram of a video playing device, as shown in fig. 30, the device 3000 includes:
the display module 3001 is configured to display a video frame in a first area in the display interface;
an interception module 3002, configured to intercept at least one video content from the video picture in response to an interception operation for the video picture;
The display module 3001 is further configured to display the at least one enlarged video content in a second area in the display interface.
Optionally, the display interface includes a vertical screen display interface and a horizontal screen display interface, where the vertical screen display interface is a display interface when the terminal device is in a vertical screen state, and the horizontal screen display interface is a display interface when the terminal device is in a horizontal screen state.
Optionally, the display module 3001 is specifically configured to:
Displaying the at least one video content in a second area of the display interface in accordance with the boundary shape of the second area, or
And displaying the at least one video content which is amplified according to a preset proportion in a second area in the display interface.
Optionally, an adjustment module 3003 is also included;
The adjustment module 3003 is specifically configured to:
and after the enlarged at least one video content is displayed in the second area of the display interface, adjusting a first display attribute of the at least one video content in the display interface in response to a display attribute adjustment operation for the at least one video content, wherein the first display attribute comprises a display position and/or a display size of the at least one video content.
Optionally, an adjustment module 3003 is also included;
The adjustment module 3003 is specifically configured to:
After the enlarged at least one video content is displayed in the second area of the display interface, in response to a layout adjustment operation for the display interface, a first display attribute of the at least one video content in the display interface and a second display attribute of the video picture in the display interface are synchronously adjusted, wherein the first display attribute comprises a display position and/or a display size of the at least one video content, and the second display attribute comprises a display position and/or a display size of the video picture.
Optionally, the adjusting module 3003 is specifically configured to:
Displaying template identifiers corresponding to at least one recommended layout template in the display interface, wherein the at least one recommended layout template is determined according to the number of the at least one video content;
And in response to a selection operation for each template identifier of the presentation, synchronously adjusting a first presentation attribute of the at least one video content in the display interface and a second presentation attribute of the video picture in the display interface according to the selected target layout template.
Optionally, the adjusting module 3003 is specifically configured to:
and respectively adjusting the display sizes of the at least one video content and the video picture according to the target layout template, and respectively filling the adjusted at least one video content and video picture into the corresponding display area in the display interface.
Optionally, the interception module 3002 is specifically configured to:
obtaining at least one interception range in the video picture in response to an interception operation for the video picture;
intercepting the at least one video content from the video picture based on the at least one interception range.
Optionally, the interception module 3002 is specifically configured to:
for the at least one interception range, the following steps are respectively executed:
intercepting content within an intercepting range in the video picture as a video content, or
and intercepting, from the video picture, at least one object that is located within an interception range and whose degree of completeness meets a preset condition, as a video content.
Optionally, the interception module 3002 is further configured to:
in response to an interception operation for the video picture, filling preset content into the at least one interception range after intercepting the at least one video content from the video picture, or
retaining the original video content in the at least one interception range after the corresponding video content is intercepted.
In the embodiment of the application, a screenshot operation is performed on the video picture by detecting the user gesture to obtain the video content of interest to the user; the intercepted video content is then laid out according to the user operation, and the video content of interest to the user is enlarged and displayed at a better position on the screen, so that the space of the display interface is fully utilized and the user can conveniently watch the video content of interest, thereby improving the video watching experience of the user.
Based on the same technical concept, an embodiment of the present application provides a computer device. As shown in fig. 31, the computer device includes at least one processor 3101 and a memory 3102 connected to the at least one processor. The specific connection medium between the processor 3101 and the memory 3102 is not limited in the embodiment of the present application; in fig. 31, the processor 3101 and the memory 3102 are connected by a bus as an example. Buses may be divided into address buses, data buses, control buses, and the like.
In an embodiment of the present application, the memory 3102 stores instructions executable by the at least one processor 3101, and the at least one processor 3101 may perform the steps of the video playing method described above by executing the instructions stored in the memory 3102.
The processor 3101 is the control center of the computer device, and may use various interfaces and lines to connect the various parts of the computer device, implementing video playback by running or executing the instructions stored in the memory 3102 and invoking the data stored in the memory 3102. Optionally, the processor 3101 may include one or more processing units, and the processor 3101 may integrate an application processor and a modem processor, where the application processor mainly handles the operating system, user interfaces, application programs, and the like, and the modem processor mainly handles wireless communications. It will be appreciated that the modem processor may not be integrated into the processor 3101. In some embodiments, the processor 3101 and the memory 3102 may be implemented on the same chip; in some embodiments, they may also be implemented separately on independent chips.
The processor 3101 may be a general purpose processor, such as a central processing unit (CPU), a digital signal processor, an application-specific integrated circuit (ASIC), a field-programmable gate array or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component, and may implement or perform the methods, steps, and logic blocks disclosed in embodiments of the application. A general purpose processor may be a microprocessor, any conventional processor, or the like. The steps of a method disclosed in connection with the embodiments of the present application may be executed directly by a hardware processor, or by a combination of hardware and software modules in the processor.
The memory 3102, as a non-volatile computer-readable storage medium, may be used to store non-volatile software programs, non-volatile computer-executable programs, and modules. The memory 3102 may include at least one type of storage medium, for example, flash memory, hard disk, multimedia card, card memory, random access memory (RAM), static random access memory (SRAM), programmable read-only memory (PROM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), magnetic memory, magnetic disk, or optical disc. The memory 3102 may also be any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer, but is not limited thereto. The memory 3102 in embodiments of the application may also be a circuit or any other device capable of implementing a storage function, for storing program instructions and/or data.
Based on the same inventive concept, an embodiment of the present application provides a computer-readable storage medium storing a computer program executable by a computer device, which when run on the computer device, causes the computer device to perform the steps of the video playing method described above.
Based on the same inventive concept, embodiments of the present application provide a computer program product comprising a computer program stored on a computer readable storage medium, the computer program comprising program instructions which, when executed by a computer, cause the computer to perform the steps of the video playback method described above.
It will be appreciated by those skilled in the art that embodiments of the present invention may be provided as a method, or as a computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While preferred embodiments of the present invention have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. It is therefore intended that the following claims be interpreted as including the preferred embodiments and all such alterations and modifications as fall within the scope of the invention.
It will be apparent to those skilled in the art that various modifications and variations can be made to the present invention without departing from the spirit or scope of the invention. Thus, it is intended that the present invention also include such modifications and alterations insofar as they come within the scope of the appended claims or the equivalents thereof.