Disclosure of Invention
Embodiments of the present application provide a method, an apparatus, a computer device, a computer-readable storage medium, and a computer program product for displaying a co-streaming video, to solve or alleviate one or more of the technical problems set forth above.
One aspect of the embodiments of the present application provides a method for displaying a co-streaming video, applied to a server, the method comprising:
responding to a co-streaming request, acquiring a mixed live stream, wherein the mixed live stream comprises an initial co-stream frame;
performing a transcoding operation on the mixed live stream to obtain a live bitstream, wherein the live bitstream carries supplemental enhancement information, and the transcoding operation comprises: obtaining a resolution of the initial co-stream frame; comparing the resolution of the initial co-stream frame with a preset resolution; in a case where the resolution of the initial co-stream frame differs from the preset resolution, adjusting the initial co-stream frame to obtain a target co-stream frame whose resolution equals the preset resolution, wherein the target co-stream frame comprises a target video picture; and updating the supplemental enhancement information with the resolution of the target video picture; and
pushing the live bitstream to a client, wherein the client is configured to acquire the target video picture from the live bitstream according to the supplemental enhancement information and display the target video picture through a preset player.
Optionally, the acquiring a mixed live stream comprises:
acquiring a first live stream of a host terminal and a second live stream of a co-streamer terminal; and
merging the first live stream and the second live stream to obtain the mixed live stream,
wherein the merging comprises merging video pictures of the first live stream with video pictures of the second live stream to obtain the initial co-stream frame.
Optionally, the method for displaying a co-streaming video further comprises:
responding to a co-stream disconnection request, acquiring a latest first live stream;
performing transcoding on the latest first live stream and updating the supplemental enhancement information according to a resolution of a video picture of the latest first live stream, to obtain a latest live bitstream, wherein the latest live bitstream carries the updated supplemental enhancement information; and
pushing the latest live bitstream to the client, wherein the client is further configured to acquire the video picture of the latest first live stream from the latest live bitstream according to the updated supplemental enhancement information and display it through the preset player.
Optionally, the adjusting the initial co-stream frame comprises:
scaling the initial co-stream frame according to the preset resolution to obtain the target video picture; and
setting a black-border area at an edge of the scaled initial co-stream frame to obtain the target co-stream frame,
wherein the target co-stream frame comprises the scaled target video picture and the black-border area.
Optionally, the method for displaying a co-streaming video further comprises:
recording the live bitstream and the supplemental enhancement information to generate an on-demand file; and
pushing the on-demand file to the client in response to an on-demand request.
Another aspect of the embodiments of the present application provides a method for displaying a co-streaming video, applied to a client, the method comprising:
acquiring a live bitstream or an on-demand file carrying supplemental enhancement information, wherein the live bitstream or the on-demand file comprises an original video picture, the original video picture comprises a target video picture and a black-border area, and the supplemental enhancement information comprises a resolution of the target video picture;
adjusting the original video picture according to the supplemental enhancement information to obtain the target video picture; and
displaying the target video picture through a preset player.
Optionally, the adjusting the original video picture according to the supplemental enhancement information to obtain the target video picture comprises:
determining the resolution of the target video picture according to the supplemental enhancement information; and
identifying and extracting the target video picture from the original video picture according to the resolution of the target video picture.
Another aspect of the embodiments of the present application provides an apparatus for displaying a co-streaming video, applied to a server, the apparatus comprising:
an acquisition module configured to respond to a co-streaming request and acquire a mixed live stream, wherein the mixed live stream comprises an initial co-stream frame;
a transcoding module configured to perform a transcoding operation on the mixed live stream to obtain a live bitstream, wherein the live bitstream carries supplemental enhancement information, and the transcoding operation comprises: obtaining a resolution of the initial co-stream frame; comparing the resolution of the initial co-stream frame with a preset resolution; in a case where the resolution of the initial co-stream frame differs from the preset resolution, adjusting the initial co-stream frame to obtain a target co-stream frame whose resolution equals the preset resolution, wherein the target co-stream frame comprises a target video picture; and updating the supplemental enhancement information with the resolution of the target video picture; and
a pushing module configured to push the live bitstream to a client, wherein the client is configured to acquire the target video picture from the live bitstream according to the supplemental enhancement information and display the target video picture through a preset player.
Another aspect of the embodiments of the present application provides an apparatus for displaying a co-streaming video, applied to a client, the apparatus comprising:
a first acquisition module configured to acquire a live bitstream or an on-demand file carrying supplemental enhancement information, wherein the live bitstream or the on-demand file comprises an original video picture, the original video picture comprises a target video picture and a black-border area, and the supplemental enhancement information comprises a resolution of the target video picture;
a second acquisition module configured to adjust the original video picture according to the supplemental enhancement information to obtain the target video picture; and
a playing module configured to display the target video picture through a preset player.
Another aspect of the embodiments of the present application provides a computer device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor;
wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method as described above.
Another aspect of embodiments of the present application provides a computer-readable storage medium having stored therein computer instructions which, when executed by a processor, implement a method as described above.
Another aspect of embodiments of the application provides a computer program product comprising a computer program which, when executed by a processor, implements a method as described above.
By adopting the above technical solutions, the embodiments of the present application may have the following advantages:
During a co-streaming live broadcast, a mixed live stream comprising an initial co-stream frame is acquired, and a transcoding operation is performed to obtain a live bitstream carrying supplemental enhancement information. The transcoding operation includes comparing the resolution of the initial co-stream frame with a preset resolution; if the two differ, the initial co-stream frame is adjusted to obtain a target co-stream frame containing a target video picture, the resolution of the target co-stream frame being equal to the preset resolution. The resolution of the target video picture is added to the supplemental enhancement information of the live bitstream. The live bitstream is then pushed to the client, so that the client can acquire the target video picture from the live bitstream according to the supplemental enhancement information and display it through the player. In other words, because the resolution of the target video picture is added to the SEI (supplemental enhancement information) of the live bitstream, the player can accurately identify the actual content area and play it. This reduces aspect-ratio distortion and black-border problems in scenarios such as landscape/portrait switching and live-to-VOD conversion, thereby improving the viewing experience of co-streaming live broadcasts.
Detailed Description
To make the objects, technical solutions and advantages of the present application clearer, the present application is described in further detail below with reference to the drawings and embodiments. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the application. All other embodiments obtained by those skilled in the art based on the embodiments of the present application without inventive effort fall within the scope of protection of the present application.
It should be noted that the designations "first", "second", etc. in the embodiments of the present application are for descriptive purposes only and shall not be construed as indicating or implying relative importance or implicitly indicating the number of technical features referred to; thus, a feature qualified by "first" or "second" may explicitly or implicitly include at least one such feature. In addition, the technical solutions of the embodiments may be combined with one another, provided that the combination can be realized by those skilled in the art; where technical solutions are contradictory or cannot be realized, such a combination should be considered absent and outside the scope of protection claimed in the present application.
In the description of the present application, it should be understood that the numerical labels preceding the steps do not indicate the order in which the steps are performed; they merely facilitate description and distinguish the steps from one another, and should not be construed as limiting the present application.
First, terms related to the present application are explained:
Live broadcasting: transmitting media content such as video and audio in real time over the Internet, where viewers watch as the content is produced; it can be used for real-time sharing of events, performances, games, and other content.
On-demand: selecting and playing media files (e.g., movies, television shows, videos) stored on a server. Unlike live broadcasting, on-demand content is not delivered in real time; viewers can watch at any time.
Live-to-VOD conversion: converting live content into on-demand content, either during the live broadcast or after it ends, so that viewers who missed the live broadcast can watch it on demand.
Co-streaming (link-mic): during a live broadcast, the host connects with other hosts or guests in real time over video and audio to interact, which can make the live broadcast more interesting and interactive.
Landscape/portrait switching: switching the video picture from landscape (widescreen) to portrait (narrow screen) or vice versa during video playback or live broadcasting, according to the orientation of the content or device. On mobile devices, viewers may adjust the viewing mode as desired.
Video transcoding: converting a video file from one coding format to another, involving parameters such as resolution, bitrate, and frame rate, in order to adapt to different playback environments or meet specific quality requirements. Transcoding can improve video compatibility and playback efficiency.
Next, to help those skilled in the art understand the technical solutions provided by the embodiments of the present application, the related art is described below.
In the live broadcasting field, co-streaming is one of the common interaction modes. Co-streaming allows multiple users to exchange video and audio in real time within a live broadcast. However, co-streaming may change the push-stream resolution, causing player compatibility problems. To address this, the present inventors appreciated that the resolution may be kept uniform at merge time, for example by adding black borders above and below the video picture, and that sequence parameter set (SPS) and picture parameter set (PPS) information may also be added at each I frame (key frame) during transcoding to assist the player in detecting resolution changes.
However, this approach has the following drawbacks: (1) the transcoding process may not respond to resolution changes in time, so the content or picture proportions become distorted; (2) landscape/portrait switching in the player degrades the viewing experience; and (3) it is not compatible with live-to-VOD conversion.
Therefore, embodiments of the present application provide a technical solution for displaying a co-streaming video. In this solution, during merging, the resolution of the actual content area is added to the supplemental enhancement information (SEI) of the bitstream, so that the player can accurately identify and play the actual content area according to the SEI, ignore the black borders, and preserve the picture proportions and viewing experience, while remaining compatible with scenarios such as live-to-VOD conversion and landscape/portrait switching, thereby improving the co-streaming live experience. See details below.
Finally, for ease of understanding, an exemplary operating environment is described below.
As shown in Fig. 1, the operating environment includes a service platform 2, host terminals (4A, 4B, ..., 4M), and audience terminals (6A, 6B, ..., 6N). In a live broadcast scenario, the host terminals (4A, 4B, ..., 4M) log in to the service platform 2 and push live data in real time to the audience terminals (6A, 6B, ..., 6N) through the service platform 2.
The service platform 2 may provide live-room services and may be a single server, a server cluster, or a cloud computing service center.
The host terminals (4A, 4B, ..., 4M) are configured to generate live data in real time and push the live data. The live data may include audio data or video data. A host terminal may be an electronic device such as a smartphone or a tablet computer; of course, it may also be a virtual computing instance within the service platform 2.
The audience terminals (6A, 6B, ..., 6N) may be configured to receive the live data of the host terminals in real time. An audience terminal may be any type of computing device, such as a smartphone, tablet, laptop computer, smart television, or in-vehicle terminal. The audience terminals may have a built-in browser or dedicated program through which the live data is received and content is output to the user. The content may include video, audio, comments, text data, and/or the like.
The audience terminals (6A, 6B, ..., 6N) may include players, which output (e.g., present) the content to the user. The audience terminals may also include an interface with an input element (e.g., a touch screen). The input element may be configured to receive user instructions that cause the audience terminals to perform various operations, such as sending bullet comments, entering comments, or sending gifts.
The anchor terminals (4A, 4B,..4M), the audience terminals (6A, 6B,..6N) and the service platform 2 may be connected by a network. The network may include various network devices such as routers, switches, multiplexers, hubs, modems, bridges, repeaters, firewalls, and/or proxy devices, etc. The network may include physical links such as coaxial cable links, twisted pair cable links, fiber optic links, combinations thereof, and/or the like. The network may include wireless links, such as cellular links, satellite links, wi-Fi links, and/or the like.
It should be noted that the number of anchor terminals and audience terminals in the figures is merely illustrative and is not intended to limit the scope of the present application. There may be any number of anchor terminals and audience terminals depending on the situation.
The technical scheme of the present application is described below through a plurality of embodiments by taking a server (service platform 2) as an execution subject. It should be understood that these embodiments may be embodied in many different forms and should not be construed as limited to the embodiments set forth herein.
Example 1
Taking the server (service platform 2) as the execution subject, this embodiment exemplarily introduces the co-streaming video display method provided by the present application.
Fig. 2 schematically shows a flowchart of a method for displaying a co-streaming video according to the first embodiment of the present application.
As shown in Fig. 2, the method for displaying a co-streaming video may include steps S200 to S204, wherein:
Step S200: in response to a co-streaming request, a mixed live stream is acquired, wherein the mixed live stream comprises an initial co-stream frame.
Step S202: a transcoding operation is performed on the mixed live stream to obtain a live bitstream, wherein the live bitstream carries supplemental enhancement information. The transcoding operation comprises: obtaining the resolution of the initial co-stream frame; comparing the resolution of the initial co-stream frame with a preset resolution; in a case where the resolution of the initial co-stream frame differs from the preset resolution, adjusting the initial co-stream frame to obtain a target co-stream frame whose resolution equals the preset resolution, wherein the target co-stream frame comprises a target video picture; and updating the supplemental enhancement information with the resolution of the target video picture.
Step S204: the live bitstream is pushed to a client, wherein the client is configured to acquire the target video picture from the live bitstream according to the supplemental enhancement information and display the target video picture through a preset player.
In the above method for displaying a co-streaming video, during a co-streaming live broadcast, a mixed live stream comprising an initial co-stream frame is acquired and a transcoding operation is performed, thereby obtaining a live bitstream carrying supplemental enhancement information. The transcoding operation includes comparing the resolution of the initial co-stream frame with a preset resolution; if the two differ, the initial co-stream frame is adjusted to obtain a target co-stream frame containing a target video picture, the resolution of the target co-stream frame being equal to the preset resolution. The resolution of the target video picture is added to the supplemental enhancement information of the live bitstream. The live bitstream is then pushed to the client, so that the client can acquire the target video picture from the live bitstream according to the supplemental enhancement information and display it through the player. In other words, because the resolution of the target video picture is added to the SEI (supplemental enhancement information) of the live bitstream, the player can accurately identify the actual content area and play it. This reduces aspect-ratio distortion and black-border problems in scenarios such as landscape/portrait switching and live-to-VOD conversion, thereby improving the viewing experience of co-streaming live broadcasts.
Each of steps S200 to S204, as well as optional additional steps, is described in detail below with reference to Fig. 2.
Step S200: in response to a co-streaming request, a mixed live stream is acquired, wherein the mixed live stream comprises an initial co-stream frame.
During a live broadcast, the host terminal (also called the host side) may push the generated live stream to the server. The server transcodes the live stream to generate a live bitstream, which it may push to the client; the client then displays the host's live picture accordingly.
During a live broadcast, the host may also interact with other hosts or viewers through co-streaming. As shown in Fig. 3, host A (who may be referred to as the host side) may initiate a co-streaming request to the server directed at host B (who may be referred to as the co-streamer side). After receiving the co-streaming request, the server may forward it to host B. If host B accepts the request, host A and host B enter the co-streaming mode, in which the live streams of host A and host B are displayed simultaneously in the live room, so that a viewer sees the live pictures of both hosts at the same time, i.e., the co-stream frame. It should be noted that there may be one or more co-streamer sides; the following takes a single co-streamer side as an example to exemplarily describe the co-streaming video display method provided by the embodiments of the present application.
To present the co-stream frame, the server may acquire the respective live streams of the host side and the co-streamer side and mix the two live streams into one, i.e., the mixed live stream. The mixed live stream may include the initial co-stream frame, which may be generated by processing the two hosts' live pictures in a preset manner, such as cropping or splicing. An exemplary scheme is provided below.
In an alternative embodiment, as shown in Fig. 4, step S200 may include:
Step S300: a first live stream of the host side and a second live stream of the co-streamer side are acquired.
Step S302: the first live stream and the second live stream are merged to obtain the mixed live stream. The merging comprises merging the video pictures of the first live stream with the video pictures of the second live stream to obtain the initial co-stream frame.
In practical applications, after the link succeeds, the server may obtain the first live stream of the host side (e.g., host A) and the second live stream of the co-streamer side (e.g., host B), and merge the two into a mixed live stream. There are various merging methods, such as picture-in-picture, side-by-side display, and partial overlay. For example, the video picture of the second live stream may be embedded into the video picture of the first live stream to obtain the initial co-stream frame; in this mode, the host's video may fill the player window while the co-streamer's video is displayed as a small window at a specific location. The initial co-stream frame then has the same resolution as the video picture of the first live stream, unchanged from before the link. As another example, the video pictures of the first live stream and the second live stream may be spliced horizontally to obtain the initial co-stream frame; in this mode, the host and the co-streamer are displayed side by side. If the resolution of the video pictures of the first live stream is 720×640 and that of the second live stream is also 720×640, the resolution of the initial co-stream frame is 1440×640. It can be seen that the resolution of the live stream to be transcoded changes before and after the link (720×640 before, 1440×640 after).
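The resolution arithmetic of the two compositing modes above can be sketched in a few lines of Python. This is an illustrative sketch only; the function name, tuple layout, and mode strings are assumptions, not part of the application:

```python
def merged_resolution(host, guest, mode):
    """Return the (width, height) of the initial co-stream frame
    produced by merging two source frames in the given mode."""
    hw, hh = host
    gw, gh = guest
    if mode == "pip":
        # Picture-in-picture: the guest is embedded as a small window,
        # so the canvas keeps the host's resolution (unchanged vs. pre-link).
        return (hw, hh)
    if mode == "side_by_side":
        # Horizontal splicing: widths add, height is the larger of the two.
        return (hw + gw, max(hh, gh))
    raise ValueError(f"unknown merge mode: {mode}")

print(merged_resolution((720, 640), (720, 640), "pip"))           # (720, 640)
print(merged_resolution((720, 640), (720, 640), "side_by_side"))  # (1440, 640)
```

With two 720×640 sources, side-by-side merging yields 1440×640, reproducing the pre-link/post-link resolution change described above.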
In this embodiment, by merging the first live stream and the second live stream, video pictures from different sources can be combined into an initial co-stream frame for simultaneous viewing by the audience.
As can be seen, the resolution of the generated initial co-stream frame (containing hosts A and B) may change relative to the live picture before the link (host A only). Such a resolution change can lead to player compatibility issues. The method for displaying a co-streaming video according to the present application is further described below by way of examples to overcome the aforementioned problem.
Step S202: a transcoding operation is performed on the mixed live stream to obtain a live bitstream, wherein the live bitstream carries supplemental enhancement information. The transcoding operation comprises: obtaining the resolution of the initial co-stream frame; comparing the resolution of the initial co-stream frame with a preset resolution; in a case where the resolution of the initial co-stream frame differs from the preset resolution, adjusting the initial co-stream frame to obtain a target co-stream frame whose resolution equals the preset resolution, wherein the target co-stream frame comprises a target video picture; and updating the supplemental enhancement information with the resolution of the target video picture.
As mentioned above, co-streaming may change the resolution of the live stream to be transcoded before and after the link, which may cause player compatibility problems. Therefore, when transcoding is performed, the resolution of the initial co-stream frame can be obtained and compared with a preset resolution (the target output resolution). The preset resolution may be set according to the host's settings, client performance, and transcoding requirements; for example, it may be the same as the resolution of the video pictures of the first live stream, such as 720×640. If the initial co-stream frame matches the preset resolution (e.g., the picture-in-picture mode), no adjustment is needed, and the initial co-stream frame is taken as the target co-stream frame to be output. If the initial co-stream frame differs from the preset resolution (e.g., side-by-side display), the initial co-stream frame may be adjusted according to the preset resolution, for example by compression, stretching, or cropping, to obtain a target co-stream frame output at the preset resolution (720×640). The target co-stream frame may include a target video picture, which corresponds to the adjusted initial co-stream frame. The resolution of the target video picture is added to the SEI (supplemental enhancement information) of the live bitstream.
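The compare-and-adjust decision described above can be summarized in a short Python sketch. It is a hypothetical illustration (the function name and the returned layout are assumptions), not the actual transcoder interface:

```python
def decide_target(initial_res, preset_res):
    """Given the initial co-stream frame resolution and the preset (target
    output) resolution, return (canvas_res, content_res): the output canvas
    resolution and the actual-content resolution to record in the SEI."""
    if initial_res == preset_res:
        # e.g. picture-in-picture: nothing to adjust; content fills the canvas.
        return preset_res, initial_res
    iw, ih = initial_res
    pw, ph = preset_res
    # Scale to fit inside the preset canvas while keeping the aspect ratio;
    # the remaining canvas area is later padded with black borders.
    scale = min(pw / iw, ph / ih)
    return preset_res, (round(iw * scale), round(ih * scale))

# Side-by-side 1440x640 frame against a 720x640 preset resolution:
print(decide_target((1440, 640), (720, 640)))  # ((720, 640), (720, 320))
```

The second element of the result is exactly the resolution that, in the scheme above, is written into the bitstream's SEI for the player to use.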
In this embodiment, when a resolution change is detected, the initial co-stream frame is adjusted according to the target output resolution to obtain a target co-stream frame that can be output normally, avoiding transcoding failure. Meanwhile, the resolution of the adjusted initial co-stream frame, i.e., the resolution information of the actual content area, is added to the SEI of the live bitstream, which subsequently assists the player in detecting resolution changes of the live bitstream and thus accurately identifying the actual content area.
As mentioned above, there are various ways to adjust the initial co-stream frame. An exemplary scheme is provided below.
In an alternative embodiment, as shown in Fig. 5, step S202 may include:
Step S400: the initial co-stream frame is scaled according to the preset resolution to obtain the target video picture.
Step S402: a black-border area is set at the edge of the scaled initial co-stream frame to obtain the target co-stream frame. The target co-stream frame comprises the scaled target video picture and the black-border area.
For example, the preset resolution may be 720×640 and the resolution of the initial co-stream frame 1440×640. Since the resolution of the initial co-stream frame is greater than the preset resolution, the initial co-stream frame can be scaled down, e.g., compressed to 720×320, which avoids distorting the picture proportions. Because the compressed resolution still differs from the preset resolution, black borders can be added above and below the compressed initial co-stream frame to form black-border areas, each of which may have a resolution of 720×160. Combining the compressed initial co-stream frame with the black-border areas yields the target co-stream frame.
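The scale-then-letterbox arithmetic of steps S400 to S402 can be checked with a small Python sketch, using the same 1440×640 → 720×640 example; the helper name is an assumption for illustration:

```python
def letterbox(src, canvas):
    """Scale src = (w, h) to fit inside canvas = (w, h) while preserving
    the aspect ratio, and return (scaled, (pad_x, pad_y)), where pad_x and
    pad_y are the black-border sizes added on the left/right and top/bottom
    edges respectively."""
    sw, sh = src
    cw, ch = canvas
    scale = min(cw / sw, ch / sh)
    scaled = (round(sw * scale), round(sh * scale))
    return scaled, ((cw - scaled[0]) // 2, (ch - scaled[1]) // 2)

# The 1440x640 side-by-side frame fitted into a 720x640 canvas:
print(letterbox((1440, 640), (720, 640)))  # ((720, 320), (0, 160))
```

This reproduces the example above: the content is compressed to 720×320, and two 720×160 black-border areas fill the top and bottom of the 720×640 canvas.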
In this embodiment, by padding with black borders, a target co-stream frame conforming to the target output resolution is obtained while the integrity and correct display proportions of the video content are maintained, avoiding picture distortion caused by stretching or cropping.
Step S204: the live bitstream is pushed to a client, wherein the client is configured to acquire the target video picture from the live bitstream according to the supplemental enhancement information and display the target video picture through a preset player.
The client may be a host terminal, a co-streamer terminal, or an audience terminal. The server may push the live bitstream to the client and pass through the resolution information carried in its SEI. After receiving the live bitstream, the client's player can parse the SEI to obtain the resolution information of the target video picture. The player may first determine the current display mode (portrait or landscape). In landscape mode, the player can identify and extract the target video picture (the actual content area) from the target co-stream frame of the live bitstream according to the resolution information, and play and display it, so that the black-border area does not degrade the viewer's experience. Of course, the player may also display the target co-stream frame directly; because the black borders were padded in advance, the picture will not be distorted. In portrait mode, the player may likewise play only the actual content area or directly play the target co-stream frame, as actually required, which is not limited herein.
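The client-side extraction step can be sketched as follows. This assumes a hypothetical SEI payload of the form `{"content_width": w, "content_height": h}` and a centered content area; both are illustrative assumptions rather than the application's actual SEI syntax:

```python
def extract_content_rect(canvas, sei):
    """Given the decoded canvas resolution and the SEI-carried content
    resolution, return the (x, y, w, h) rectangle of the actual content
    area, assumed to be centered, which the player crops before display."""
    cw, ch = canvas
    w, h = sei["content_width"], sei["content_height"]
    return ((cw - w) // 2, (ch - h) // 2, w, h)

# A 720x640 canvas whose SEI advertises a 720x320 content area:
print(extract_content_rect((720, 640), {"content_width": 720, "content_height": 320}))
# (0, 160, 720, 320)
```

Cropping to this rectangle lets the player discard the black-border areas in landscape mode while keeping the content's proportions intact.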
In this embodiment, by analyzing the SEI (supplemental enhancement information) of the live broadcast code stream, the resolution of the target video picture is obtained, so that the player can accurately identify the actual content area and play the actual content area, and the problems of picture proportion distortion and black edge are reduced, so that the live broadcast video picture is suitable for a horizontal-vertical screen switching scene, and the viewing experience of live broadcast link is effectively improved.
The above embodiment describes a method for displaying a co-streaming video. Further exemplary descriptions are given below for displaying video in scenarios such as co-streaming disconnection and live-to-on-demand conversion.
In an alternative embodiment, as shown in fig. 6, the method for displaying a co-streaming video may further include:
Step S500: in response to a co-streaming disconnection request, acquire the latest first live stream.
Step S502: perform transcoding on the latest first live stream and update the supplemental enhancement information according to the resolution of the video frame of the latest first live stream, so as to obtain the latest live bitstream, where the latest live bitstream carries the updated supplemental enhancement information.
Step S504: push the latest live bitstream to the client, where the client is further configured to obtain the video frame of the latest first live stream from the latest live bitstream according to the updated supplemental enhancement information and display it through a preset player.
In practical applications, if the anchor end or the guest end disconnects the co-streaming session, the server can acquire the latest live stream of the anchor end and perform a transcoding operation to obtain the latest live bitstream. Meanwhile, the SEI can be refreshed according to the latest video frame resolution of the first live stream, and the refreshed SEI is added to the live bitstream. The latest live bitstream is then pushed to the client, which can obtain the video frame of the latest first live stream by parsing the updated SEI and display it through the player.
In this embodiment, after the co-streaming session is disconnected, the server can return from the mixing state to the initial state, that is, re-acquire the live stream of the anchor end, transcode it, and refresh the SEI to obtain the latest live bitstream. Accordingly, the client can quickly obtain the anchor's video frame according to the SEI and display it through the player. The transcoding process of this embodiment can thus respond quickly to the resolution change before and after co-streaming, further improving the live viewing experience.
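For illustration, resolution metadata of this kind is commonly carried in an H.264/H.265 `user_data_unregistered` SEI message, whose body is a 16-byte UUID followed by application-defined data. The Python sketch below assumes a made-up UUID and a minimal big-endian width/height layout; the actual payload format used by the embodiments is not specified in the text.

```python
import struct
import uuid

# Hypothetical 16-byte UUID identifying this application's resolution
# metadata inside a user_data_unregistered SEI message.
RES_SEI_UUID = uuid.UUID("00000000-0000-0000-0000-000000000001").bytes

def build_resolution_payload(width, height):
    """Pack the content resolution as an SEI payload body:
    16-byte UUID followed by two big-endian 16-bit integers."""
    return RES_SEI_UUID + struct.pack(">HH", width, height)

def parse_resolution_payload(payload):
    """Inverse of build_resolution_payload; returns (width, height),
    or None if the UUID does not match our metadata type."""
    if payload[:16] != RES_SEI_UUID:
        return None
    return struct.unpack(">HH", payload[16:20])
```

On disconnection, the server would rebuild this payload with the anchor stream's new resolution and attach it to the refreshed live bitstream, which is what "updating the supplemental enhancement information" amounts to at the byte level.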
In an alternative embodiment, as shown in fig. 7, the method for displaying a co-streaming video may further include:
Step S600: record the live bitstream and the supplemental enhancement information to generate an on-demand file.
Step S602: in response to an on-demand request, push the on-demand file to the client.
To provide an on-demand service, the server can record the live bitstream in real time during the broadcast and synchronously record the supplemental enhancement information, so as to capture resolution changes and the actual content area, thereby offering a richer viewing experience on demand. After recording is completed, all the data are packaged into an on-demand file. The on-demand file may be a single media file (e.g., MP4) or a directory structure comprising multiple files (e.g., an M3U8 playlist and its TS segments). During packaging, the server can associate the live bitstream with the supplemental enhancement information to ensure accurate presentation during playback. In response to an on-demand request, the server pushes the on-demand file to the client; the client parses the SEI in the on-demand file, plays the actual content area according to the resolution information therein, and ignores the black borders.
In this embodiment, resolution information of the actual content area is added to the SEI of the live bitstream, and the live bitstream and the SEI are recorded synchronously to generate an on-demand file. The client can therefore accurately identify and play the actual content area according to the SEI during on-demand playback, reducing aspect-ratio distortion and black-border problems, remaining compatible with the live-to-on-demand scenario, and further improving the on-demand viewing experience.
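One way to keep the recorded SEI associated with the live bitstream, as the recording step requires, is to store resolution updates as a timeline keyed by stream timestamp, so that playback at any position can look up the resolution then in effect. A minimal sketch under that assumption (the class and method names are hypothetical, not from the text):

```python
import bisect

class SeiTimeline:
    """Record (timestamp_ms, width, height) entries as SEI updates arrive
    during recording, and look up the content resolution in effect at any
    playback time of the on-demand file."""

    def __init__(self):
        self._times = []   # ascending timestamps, ms
        self._sizes = []   # (width, height) at each timestamp

    def record(self, ts_ms, width, height):
        self._times.append(ts_ms)
        self._sizes.append((width, height))

    def resolution_at(self, ts_ms):
        # Most recent entry at or before ts_ms; None if before the first.
        i = bisect.bisect_right(self._times, ts_ms) - 1
        if i < 0:
            return None
        return self._sizes[i]
```

For instance, a co-streaming segment recorded at 720×320 followed by a post-disconnection segment at 720×640 would be represented by two entries, and the player would query the timeline as it seeks.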
Example Two
This embodiment of the application takes a client (a viewer terminal or an anchor terminal) as the execution body and exemplarily introduces the method for displaying a co-streaming video.
Fig. 8 schematically shows a flowchart of a method for displaying a co-streaming video according to the second embodiment of the application.
As shown in fig. 8, the method for displaying a co-streaming video may include steps S700 to S704, wherein:
Step S700: obtain a live bitstream or an on-demand file provided by a server and carrying supplemental enhancement information, where the live bitstream or the on-demand file includes an original video frame, the original video frame includes a target video frame and a black border region, and the supplemental enhancement information includes the resolution of the target video frame.
Step S702: adjust the original video frame according to the supplemental enhancement information to obtain the target video frame.
Step S704: display the target video frame through a preset player.
According to the method for displaying a co-streaming video, the client can obtain the live bitstream or the on-demand file carrying the supplemental enhancement information provided by the server, and parse the supplemental enhancement information to obtain the resolution information of the target video frame. Based on the resolution information, the client may adjust the original video frame of the live bitstream or the on-demand file to obtain the target video frame (i.e., the actual content area), and display the target video frame through the player. In this embodiment, by obtaining the resolution information of the target video frame from the SEI (supplemental enhancement information) carried by the live bitstream, the player can accurately identify and play the actual content area. This reduces aspect-ratio distortion and black-border problems in scenarios such as portrait-landscape switching and live-to-on-demand conversion, thereby improving the viewing experience of co-streaming live broadcasts or on-demand playback.
There are various methods for adjusting the original video frame, such as cropping, compression, and stretching. An exemplary scheme is provided below.
In an alternative embodiment, step S702 may include: determining the resolution of the target video frame according to the supplemental enhancement information; and identifying and extracting the target video frame from the original video frame according to the resolution of the target video frame.
Illustratively, the client may obtain the resolution of the target video frame by parsing the supplemental enhancement information; the target video frame is the actual content area. Based on this resolution, the target video frame and the black border region in the original video frame are identified, after which the black border region is cropped away or the target video frame is extracted from the original video frame.
In this embodiment, the actual content area can be accurately extracted from the live bitstream or the on-demand file by means of the resolution information, and the black borders can be ignored, effectively improving the viewing experience.
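The identification-and-extraction step reduces to computing a crop rectangle: given the container (letterboxed) frame size and the content resolution parsed from the SEI, and assuming the black borders are centered as in the server-side padding, the actual content area can be located as sketched below (an illustrative helper, not from the text):

```python
def crop_rect(container_w, container_h, content_w, content_h):
    """Given the letterboxed frame size and the actual content resolution
    carried in the SEI, return the (x, y, w, h) crop rectangle of the
    content area, assuming the black borders are centered."""
    x = (container_w - content_w) // 2
    y = (container_h - content_h) // 2
    return x, y, content_w, content_h

# The 720x640 target co-streaming frame from the earlier example holds a
# 720x320 content area with 160-pixel borders above and below.
print(crop_rect(720, 640, 720, 320))  # → (0, 160, 720, 320)
```

The player would apply this rectangle as the source region when rendering in landscape mode, and could skip the crop entirely in modes where the padded frame is displayed as-is.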
Example Three
Fig. 9 schematically shows a block diagram of a co-streaming video display apparatus according to a third embodiment of the present application. The apparatus may be used in a server and may be divided into one or more program modules, which are stored in a storage medium and executed by one or more processors to implement the embodiment of the present application. A program module in the embodiments of the present application refers to a series of computer program instruction segments capable of performing specified functions; each program module is described in detail below. As shown in fig. 9, the apparatus 1000 may include an acquisition module 1100, a transcoding module 1200, and a pushing module 1300, where:
the acquisition module 1100 is configured to acquire a mixed-picture live stream in response to a co-streaming request, where the mixed-picture live stream includes an initial co-streaming frame;
the transcoding module 1200 is configured to perform a transcoding operation on the mixed-picture live stream to obtain a live bitstream, where the live bitstream carries supplemental enhancement information;
the transcoding operation includes obtaining the resolution of the initial co-streaming frame, comparing it with a preset resolution, and, when the resolution of the initial co-streaming frame differs from the preset resolution, adjusting the initial co-streaming frame to obtain a target co-streaming frame having the preset resolution, where the target co-streaming frame includes a target video frame;
and the pushing module 1300 is configured to push the live bitstream to a client, where the client is configured to obtain the target video frame from the live bitstream according to the supplemental enhancement information and display the target video frame through a preset player.
As an optional embodiment, the acquiring the mixed-picture live stream includes:
acquiring a first live stream of an anchor end and a second live stream of a co-streaming (guest) end;
combining the first live stream and the second live stream to obtain the mixed-picture live stream;
where the combining includes merging the video frames of the first live stream with the video frames of the second live stream to obtain the initial co-streaming frame.
As an alternative embodiment, the co-streaming video display apparatus 1000 is further configured to:
acquire the latest first live stream in response to a co-streaming disconnection request;
perform transcoding on the latest first live stream and update the supplemental enhancement information according to the resolution of the video frame of the latest first live stream, so as to obtain the latest live bitstream, where the latest live bitstream carries the updated supplemental enhancement information;
and push the latest live bitstream to the client, where the client is further configured to obtain the video frame of the latest first live stream from the latest live bitstream according to the updated supplemental enhancement information and display it through a preset player.
As an optional embodiment, the adjusting the initial co-streaming frame includes:
scaling the initial co-streaming frame according to the preset resolution to obtain the target video frame;
setting a black border region at the edge of the scaled initial co-streaming frame to obtain the target co-streaming frame;
where the target co-streaming frame includes the scaled target video frame and the black border region.
As an alternative embodiment, the co-streaming video display apparatus 1000 is further configured to:
record the live bitstream and the supplemental enhancement information to generate an on-demand file;
and push the on-demand file to the client in response to an on-demand request.
Example Four
Fig. 10 schematically shows a block diagram of a co-streaming video display apparatus according to a fourth embodiment of the present application. The apparatus may be used in a client and may be divided into one or more program modules, which are stored in a storage medium and executed by one or more processors to implement the embodiment of the present application. A program module in the embodiments of the present application refers to a series of computer program instruction segments capable of performing specified functions; each program module is described in detail below. As shown in fig. 10, the apparatus 2000 may include a first acquisition module 2100, a second acquisition module 2200, and a playing module 2300, wherein:
the first acquisition module 2100 is configured to obtain a live bitstream or an on-demand file provided by a server and carrying supplemental enhancement information, where the live bitstream or the on-demand file includes an original video frame, the original video frame includes a target video frame and a black border region, and the supplemental enhancement information includes the resolution of the target video frame;
the second acquisition module 2200 is configured to adjust the original video frame according to the supplemental enhancement information to obtain the target video frame;
and the playing module 2300 is configured to display the target video frame through a preset player.
As an alternative embodiment, adjusting the original video frame according to the supplemental enhancement information to obtain the target video frame includes:
determining the resolution of the target video frame according to the supplemental enhancement information;
and identifying and extracting the target video frame from the original video frame according to the resolution of the target video frame.
Example Five
Fig. 11 schematically shows a hardware architecture diagram of a computer device 10000 suitable for implementing the method for displaying a co-streaming video according to a fifth embodiment of the present application. In some embodiments, the computer device 10000 may be a smart phone, a wearable device, a tablet computer, a personal computer, a vehicle-mounted terminal, a game machine, a virtual device, a workstation, a digital assistant, a set-top box, a robot, or the like. In other embodiments, the computer device 10000 may be a rack server, a blade server, a tower server, or a cabinet server (including a stand-alone server or a server cluster composed of multiple servers), or the like. As shown in fig. 11, the computer device 10000 includes, but is not limited to, a memory 10010, a processor 10020, and a network interface 10030, which can be communicatively connected to each other through a system bus, where:
The memory 10010 includes at least one type of computer-readable storage medium, including flash memory, a hard disk, a multimedia card, a card-type memory (e.g., SD or DX memory), random access memory (RAM), static random access memory (SRAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), programmable read-only memory (PROM), magnetic memory, a magnetic disk, an optical disk, and the like. In some embodiments, the memory 10010 may be an internal storage module of the computer device 10000, such as a hard disk or memory of the computer device 10000. In other embodiments, the memory 10010 may also be an external storage device of the computer device 10000, such as a plug-in hard disk, a smart media card (SMC), a secure digital (SD) card, or a flash card provided on the computer device 10000. Of course, the memory 10010 may also include both an internal storage module of the computer device 10000 and an external storage device thereof. In this embodiment, the memory 10010 is typically used to store the operating system installed on the computer device 10000 and various application software, such as the program code of the co-streaming video display method. In addition, the memory 10010 may be used to temporarily store various types of data that have been output or are to be output.
The processor 10020 may be, in some embodiments, a central processing unit (CPU), a controller, a microcontroller, a microprocessor, or another chip. The processor 10020 is typically configured to control the overall operation of the computer device 10000, such as performing control and processing related to data interaction or communication of the computer device 10000. In this embodiment, the processor 10020 is configured to execute program code stored in the memory 10010 or to process data.
The network interface 10030 may comprise a wireless network interface or a wired network interface, and is typically used to establish a communication link between the computer device 10000 and other computer devices. For example, the network interface 10030 is used to connect the computer device 10000 to an external terminal through a network and to establish a data transmission channel and a communication link between the computer device 10000 and the external terminal. The network may be a wireless or wired network such as an intranet, the Internet, the Global System for Mobile Communications (GSM), Wideband Code Division Multiple Access (WCDMA), a 4G network, a 5G network, Bluetooth, Wi-Fi, or the like.
It should be noted that fig. 11 only shows a computer device having components 10010-10030, but it should be understood that not all of the illustrated components need to be implemented; more or fewer components may be implemented instead.
In this embodiment, the co-streaming video display method stored in the memory 10010 may be further divided into one or more program modules and executed by one or more processors (e.g., the processor 10020) to implement an embodiment of the application.
Example Six
The embodiment of the application also provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of the method for displaying a co-streaming video in the above embodiments.
In this embodiment, the computer-readable storage medium includes flash memory, a hard disk, a multimedia card, a card-type memory (e.g., SD or DX memory), random access memory (RAM), static random access memory (SRAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), programmable read-only memory (PROM), magnetic memory, a magnetic disk, an optical disk, and the like. In some embodiments, the computer-readable storage medium may be an internal storage unit of a computer device, such as a hard disk or memory of the computer device. In other embodiments, the computer-readable storage medium may also be an external storage device of a computer device, such as a plug-in hard disk, a smart media card (SMC), a secure digital (SD) card, or a flash card provided on the computer device. Of course, the computer-readable storage medium may also include both internal storage units of a computer device and its external storage devices. In this embodiment, the computer-readable storage medium is typically used to store the operating system and various application software installed on the computer device, such as the program code of the co-streaming video display method in the embodiments. Furthermore, the computer-readable storage medium may also be used to temporarily store various types of data that have been output or are to be output.
Example Seven
The present application also provides a computer program product comprising a computer program which, when executed by a processor, implements the method of the above embodiments.
It will be apparent to those skilled in the art that the modules or steps of the embodiments of the application described above may be implemented by a general-purpose computing device; they may be concentrated on a single computing device or distributed over a network of multiple computing devices. Alternatively, they may be implemented in program code executable by a computing device, so that they may be stored in a storage device and executed by the computing device; in some cases, the steps shown or described may be performed in an order different from that described here. They may also be fabricated into individual integrated circuit modules, or multiple modules or steps among them may be fabricated into a single integrated circuit module. Thus, embodiments of the application are not limited to any specific combination of hardware and software.
It should be noted that the foregoing describes only preferred embodiments of the present application and is not intended to limit the scope of the present application. All equivalent structures or equivalent process variations made using the description and drawings of the present application, whether applied directly or indirectly in other related technical fields, are likewise included within the scope of the present application.