Detailed Description
For the purpose of making the objects, technical solutions, and advantages of the present application more apparent, the present application will be further described in detail with reference to the accompanying drawings. The described embodiments should not be construed as limiting the present application, and all other embodiments obtained by those skilled in the art without inventive effort fall within the scope of the present application.
In the following description, reference is made to "some embodiments", which describe a subset of all possible embodiments. It is to be understood that "some embodiments" may be the same subset or different subsets of all possible embodiments, and may be combined with one another without conflict.
In the following description, the terms "first", "second", "third", and the like are merely used to distinguish similar objects and do not denote a particular ordering of the objects. It is to be understood that "first", "second", and "third" may be interchanged in a particular order or sequence, where permitted, so that the embodiments of the application described herein can be practiced in orders other than those illustrated or described herein.
In the present embodiment, the term "module" or "unit" refers to a computer program or a part of a computer program having a predetermined function and working together with other relevant parts to achieve a predetermined object, and may be implemented in whole or in part by using software, hardware (such as a processing circuit or a memory), or a combination thereof. Also, a processor (or multiple processors or memories) may be used to implement one or more modules or units. Furthermore, each module or unit may be part of an overall module or unit that incorporates the functionality of the module or unit.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used in the embodiments of the application is for the purpose of describing embodiments of the application only and is not intended to be limiting of the application.
In the embodiments of the application, when relevant data are collected and processed, the requirements of relevant laws and regulations should be strictly complied with: the informed consent or separate consent of the personal information subject should be obtained, and subsequent data use and processing should be carried out within the scope authorized by laws and regulations and by the personal information subject.
Before describing embodiments of the present application in further detail, the terms and terminology involved in the embodiments of the present application will be described; these terms and terminology apply to the following explanation.
1) Web application (Web App): an application built on web technologies such as HyperText Markup Language (HTML), Cascading Style Sheets (CSS), and JavaScript, which is accessed and used by a user through a browser or web view (WebView) and can run without being installed on a local device. Web applications are a lightweight, cross-platform form of application whose interactions and functionality are similar to those of desktop or mobile applications. Web applications are suitable for various scenarios such as online services, electronic commerce, and media.
2) Immersion flow: a content display form that takes content as its core and focuses on the immersive experience. Through a smooth interaction design and a full-screen display mode, the user can focus on a single piece of content and gradually browse to the next piece of content by sliding, scrolling, and the like. The immersion flow is a single-screen-focused, linearly browsed presentation of a content stream.
The immersion flow is a user experience design mode aimed at immersing the user in a single piece of content through full-screen, interference-free content presentation and sliding switching. Immersion flows are common on short-video platforms and in scenarios where deep engagement is the goal, such as image-text content, news, and product presentations.
The presentation form of the immersion flow includes presenting only one content unit (such as a video, an article, or a picture) at a time, switching naturally to the next content unit when the user slides or scrolls, and enhancing the sense of immersion through seamless animation and transition effects. The immersion flow is characterized by single-screen focus, smooth switching, and strong interaction design.
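The single-screen, linearly browsed behavior described above can be modeled by the following minimal JavaScript sketch. The class and method names (ImmersiveFeed, swipeNext, swipePrev) are hypothetical, introduced for illustration only, and do not correspond to any particular embodiment.

```javascript
// Minimal model of an immersion-flow browsing state: exactly one content
// unit is active at a time, and a swipe moves linearly to the next or
// previous unit, matching single-screen, linear browsing.
class ImmersiveFeed {
  constructor(contentUnits) {
    this.units = contentUnits; // e.g. videos, articles, or pictures
    this.index = 0;            // only one unit is presented at a time
  }
  current() {
    return this.units[this.index];
  }
  swipeNext() { // e.g. an upward swipe gesture
    if (this.index < this.units.length - 1) this.index += 1;
    return this.current();
  }
  swipePrev() { // e.g. a downward swipe gesture
    if (this.index > 0) this.index -= 1;
    return this.current();
  }
}
```

The index is clamped at both ends, so swiping past the last unit simply keeps the last unit on screen, consistent with a linear (non-looping) content stream.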
3) Rich media content: content that contains multiple forms of media, including text, pictures, audio, video, animation, interactive elements, and the like. Rich media content aims to enhance the user experience, attract the user's attention, and promote engagement through diversified forms of expression.
Rich media content has the following characteristics. First, diversity: combining multimedia forms such as video, audio, animation, and text provides more vivid expression. Second, interactivity: the user is allowed to interact with the content (such as clicking, sliding, playing, and pausing). Third, immersion: rich media technology provides a more attractive and immersive content experience.
Application scenarios of rich media content include: scenario 1, video advertisements, i.e., advertisements with interaction options during playback; scenario 2, e-commerce, i.e., videos and pictures containing 360° commodity displays; scenario 3, dynamic news, i.e., news reports with video, pictures, and animation.
4) Web page component: an independent functional module that constitutes a web page or web application and encapsulates specific interaction or display logic. Components are typically reusable and customizable and can be used in different web pages or applications.
Web page components are characterized by modularity, i.e., a component is usually packaged as independent HTML, CSS, and JavaScript and is easy to reuse; flexibility, i.e., a component's behavior can be customized through parameters or events; and isolation, i.e., a component's style and logic are independent, avoiding conflicts with other components.
Application scenarios of web page components include: a form component, which includes input boxes, buttons, and selection boxes; a navigation bar, which includes a top navigation menu and a sidebar menu; a picture carousel, which is a picture switching component supporting automatic playback and user interaction; and a data table, which is a table with paging, sorting, and filtering functions.
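As an illustration of the data-table scenario above, the following JavaScript sketch shows the paging, sorting, and filtering logic such a component might encapsulate. The function name and parameter shape are assumptions, not part of any embodiment.

```javascript
// Filter, sort, and page a list of row objects, as a data-table
// component might do internally before rendering.
function queryTable(rows, { filter, sortKey, page = 0, pageSize = 10 } = {}) {
  let result = rows.slice();                 // never mutate the caller's data
  if (filter) result = result.filter(filter); // filtering
  if (sortKey) {                              // sorting by a column key
    result.sort((a, b) =>
      a[sortKey] < b[sortKey] ? -1 : a[sortKey] > b[sortKey] ? 1 : 0);
  }
  const start = page * pageSize;              // paging
  return result.slice(start, start + pageSize);
}
```

Because the logic is packaged behind a single function, the same component can be reused across pages with different data, illustrating the modularity and reusability described above.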
5) A player is a tool or component for playing audio, video or streaming media content. The player provides play control (e.g., play, pause, volume adjustment) and rendering of video or audio files.
The player has the following characteristics: support for common media formats (such as MP4, MP3, and WebM); playback control, including functions such as play, pause, dragging the progress bar, and adjusting the volume; and extended functions, including subtitle support, variable-speed playback, advertisement insertion, definition switching, and the like.
Application scenarios of the player include: online video playback, such as video websites; live streaming, i.e., real-time video playback supporting protocols such as HTTP Live Streaming (HLS) and Dynamic Adaptive Streaming over HTTP (DASH); music playback, such as music platforms; and embedded content, such as audio and video content within an article.
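The basic playback controls described above (play, pause, volume adjustment, progress-bar dragging) can be sketched as the following minimal state model. The class name and method names are hypothetical illustrations, not the interface of any real player.

```javascript
// Minimal model of a player's control state: play/pause, seeking
// (dragging the progress bar), and volume adjustment, with clamping.
class SimplePlayer {
  constructor(durationSeconds) {
    this.duration = durationSeconds;
    this.position = 0;     // current playback position in seconds
    this.volume = 1.0;     // 0.0 .. 1.0
    this.playing = false;
  }
  play()  { this.playing = true; }
  pause() { this.playing = false; }
  seek(seconds) { // dragging the progress bar; clamped to the video length
    this.position = Math.min(Math.max(seconds, 0), this.duration);
  }
  setVolume(v) { // clamped to the valid volume range
    this.volume = Math.min(Math.max(v, 0), 1);
  }
}
```

A real player would additionally decode and render media; this sketch covers only the control-state side that the surrounding text describes.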
6) Sliding/Swiper component: a user interface component that supports sliding interactions, often used to present a set of pictures, videos, or content units which the user can browse by sliding to switch. The sliding component is characterized by interactivity, i.e., the user can switch content by touching the screen or dragging with the mouse; dynamic switching, i.e., automatic carousel or manual switching is supported, with transition animation effects; and flexible layout, i.e., the sliding direction can be horizontal or vertical.
Application scenarios of the sliding component include: short video streams, in which videos are switched by sliding up and down; picture carousels, such as the commodity or promotion picture displays on the home page of an e-commerce platform; image-text cards, which display news, article abstracts, or data; and product displays, in which commodity pictures on an e-commerce page are browsed by switching left and right.
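Unlike the linear immersion flow, a carousel-style sliding component typically wraps around after the last slide. The following sketch models that switching behavior; the names are illustrative assumptions only.

```javascript
// Model of a Swiper's slide index: manual switching in either direction
// plus automatic carousel ticks, wrapping from the last slide to the first.
class Swiper {
  constructor(slideCount) {
    this.count = slideCount;
    this.index = 0;
  }
  slideTo(i) { // wrap-around indexing, also correct for negative i
    this.index = ((i % this.count) + this.count) % this.count;
  }
  next() { this.slideTo(this.index + 1); } // e.g. left swipe or autoplay tick
  prev() { this.slideTo(this.index - 1); } // e.g. right swipe
}
```

An automatic carousel would simply call `next()` on a timer; manual switching calls the same methods from touch or mouse handlers, so both interaction modes share one piece of logic.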
In order to better understand the video playing method provided by the embodiments of the present application, the video playing method in the related art and its existing technical problems are first described.
In the related art, it is assumed that an application 1 receives a video forwarding message, where the video to be played in the video forwarding message comes from an application 2. In the application 1, the video to be played is played by clicking the video forwarding message, entering a web application interface, and clicking a play start control in the web application interface; during playback, only the operation of closing video playback can be performed, and no other operations are available. Moreover, when the web application plays the video to be played, the playing interface is limited and the video presentation effect is poor. If another video needs to be played, the page must be slid to display the playback entry of the other video, and the playback entry must then be clicked; the operation is cumbersome, and the video playing efficiency is low.
As can be seen from the above, the disadvantages of the related art include at least the following: first, low interactivity and limited functionality during video playback; second, low video playing efficiency.
Embodiments of the present application provide a video playing method, an apparatus, a computer device, a computer-readable storage medium, and a computer program product, which can improve the interactivity of video playing. An exemplary application of the computer device provided by the embodiments of the present application is described below. The computer device provided by the embodiments of the present application may be implemented as various types of terminals, such as a notebook computer, a tablet computer, a desktop computer, a set-top box, a smart phone, a smart speaker, a smart watch, a smart television, a vehicle-mounted terminal, a robot, an unmanned aerial vehicle, a medical device, or an intelligent wearable device, may be implemented as a server, or may be implemented as a combination of the two. In the following, an exemplary application in which the computer device is implemented as a terminal is described.
Referring to fig. 1, fig. 1 is a schematic diagram of a network architecture of a video playing system 100 according to an embodiment of the present application. In order to support a video playing application, the server 200 and the terminal 400-1 are connected to the terminal 400-2 through a network 300, where the network 300 may be a wide area network, a local area network, or a combination of the two.
The terminal 400-1 plays a first video by using a second application and shares the first video with the terminal 400-2 in the form of a first application message. The terminal 400-2 is used for displaying a first control in a display interface of the first application, where the first control is used for playing the first video through a web application, the first video comes from the second application, and the first application and the second application are both different from the web application. When a playing instruction for the first video is received based on the first control, a first playing request carrying a first video identifier is generated and sent to the server 200; the server 200 acquires the first video based on the first playing request and returns the first video to the terminal 400-2, and the terminal 400-2 plays the first video by using the web application. In response to a first interactive operation for the first video, a second playing request carrying a second video identifier is generated; the server 200 acquires the second video based on the second playing request and returns the second video to the terminal 400-2, so that the terminal 400-2 plays the second video by using the web application.
In the embodiment of the present application, the terminal 400-2 displays a first control in a display interface of a first application, where the first control is a control for playing a first video through a web application, the first video comes from a second application, the first application and the second application are both different from the web application, and the first application is also different from the second application. When a playing instruction for the first video is received based on the first control, the first video is played by using the web application. During playback of the first video, a second video corresponding to a first interactive operation can be determined in response to the first interactive operation for the first video, and the second video is played by using the web application. That is, switching of the played video is realized through the first interactive operation, which improves the convenience of switching videos, improves the efficiency of using a web application to play videos from other, non-web applications, enriches the video switching function, and also improves interactivity during video playback.
In some embodiments, the server 200 may be a stand-alone physical server, a server cluster or distributed system formed by a plurality of physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, a content delivery network (CDN), and big data and artificial intelligence platforms. The terminal and the server may be directly or indirectly connected through wired or wireless communication, which is not limited in the embodiments of the present application.
Referring to fig. 2, fig. 2 is a schematic diagram of a structure of a terminal 400-2 according to an embodiment of the present application, and the terminal 400-2 shown in fig. 2 includes at least one processor 410, a memory 450, at least one network interface 420, and a user interface 430. The various components in terminal 400-2 are coupled together by a bus system 440. It is understood that the bus system 440 is used to enable connected communication between these components. The bus system 440 includes a power bus, a control bus, and a status signal bus in addition to the data bus. But for clarity of illustration the various buses are labeled in fig. 2 as bus system 440.
The processor 410 may be an integrated circuit chip having signal processing capabilities, such as a general-purpose processor, a digital signal processor (DSP), another programmable logic device, a discrete gate or transistor logic device, or discrete hardware components, where the general-purpose processor may be a microprocessor, any conventional processor, or the like.
The user interface 430 includes one or more output devices 431, including one or more speakers and/or one or more visual displays, that enable presentation of the media content. The user interface 430 also includes one or more input devices 432, including user interface components that facilitate user input, such as a keyboard, mouse, microphone, touch screen display, camera, other input buttons and controls.
Memory 450 may be removable, non-removable, or a combination thereof. Exemplary hardware devices include solid state memory, hard drives, optical drives, and the like. Memory 450 optionally includes one or more storage devices physically remote from processor 410.
Memory 450 includes volatile memory or non-volatile memory, and may also include both volatile and non-volatile memory. The non-volatile memory may be a read-only memory (ROM), and the volatile memory may be a random access memory (RAM). The memory 450 described in the embodiments of the present application is intended to comprise any suitable type of memory.
In some embodiments, memory 450 is capable of storing data to support various operations, examples of which include programs, modules and data structures, or subsets or supersets thereof, as exemplified below.
An operating system 451, including system programs such as a framework layer, a core library layer, and a driver layer, for implementing various basic system services and handling hardware-related tasks;
A network communication module 452 for accessing other electronic devices via one or more (wired or wireless) network interfaces 420, exemplary network interfaces 420 including Bluetooth, Wireless Fidelity (Wi-Fi), Universal Serial Bus (USB), and the like;
A presentation module 453 for enabling presentation of information (e.g., a user interface for operating peripheral devices and displaying content and information) via one or more output devices 431 (e.g., a display screen, speakers, etc.) associated with the user interface 430;
an input processing module 454 for detecting one or more user inputs or interactions from one of the one or more input devices 432 and translating the detected inputs or interactions.
In some embodiments, the apparatus provided by the embodiments of the present application may be implemented in software. Fig. 2 shows a video playing apparatus 455 stored in the memory 450, which may be software in the form of a program, a plug-in, or the like, and includes the following software modules: a first display module 4551, a first playing module 4552, a first determining module 4553, and a second playing module 4554. These modules are logical, and thus may be arbitrarily combined or further split according to the functions implemented. The functions of the respective modules are described hereinafter.
In other embodiments, the apparatus provided by the embodiments of the present application may be implemented in hardware. By way of example, the apparatus may be a processor in the form of a hardware decoding processor that is programmed to perform the video playing method provided by the embodiments of the present application; for example, the processor in the form of a hardware decoding processor may employ one or more application-specific integrated circuits (ASICs), DSPs, programmable logic devices (PLDs), complex programmable logic devices (CPLDs), field-programmable gate arrays (FPGAs), or other electronic components.
In some embodiments, the terminal may implement the video playing method provided by the embodiments of the present application by running various computer-executable instructions or computer programs. For example, the computer-executable instructions may be commands at the microprogram level, machine instructions, or software instructions. The computer program may be a native program or software module in an operating system; a native application (APP), i.e., a program that needs to be installed in the operating system to run, such as an instant messaging APP or an information APP; or an applet that can be embedded in any APP, i.e., a program that can run simply by being downloaded into a browser environment. In general, the computer-executable instructions may be any form of instructions, and the computer program may be any form of application, module, or plug-in.
The video playing method provided by the embodiment of the application will be described in connection with the exemplary application and implementation of the terminal provided by the embodiment of the application.
It should be noted that in the following examples of the video playing method, the first video is taken as a short video by way of example; based on an understanding of the following, those skilled in the art may apply the video playing method provided by the embodiments of the present application to the playback of other types of videos, including news videos, advertisements, movie videos, live videos, promotional videos, and the like. Embodiments of the application may also be applied to a variety of scenarios, including but not limited to digital media, social media, education and training, intelligent transportation, assisted driving, and the like.
Referring to fig. 3, fig. 3 is a schematic flow chart of a video playing method according to an embodiment of the present application. The video playing method according to the embodiment of the present application will be described with reference to the steps shown in fig. 3. The method may be executed by a computer device; in the embodiment of the present application, the computer device being a terminal is taken as an example for the description.
In step S101, a first control is displayed in a display interface of a first application.
In the embodiment of the application, the first control is an entry control for playing a first video through a web application, the first video is from a second application, the first application and the second application are different from the web application, and the first application and the second application are also different.
In some embodiments, the first application and the second application may be two different applications of the same type, e.g., the first application and the second application may be different instant messaging applications, and the first application and the second application may also be different short video applications. The first application and the second application may also be two applications of different types, for example, the first application is an instant messaging application and the second application is a short video application, or the first application is a short video application and the second application is an instant messaging application.
In some embodiments, the first application may be capable of receiving a sharing message (or a link message) forwarded from the second application. The sharing message may carry a thumbnail or a text link of the first video, and the area where the thumbnail or the text link is located is the first control. The first control supports at least one of a touch operation, a voice operation, a gesture operation, and a posture operation, where the touch operation may be, for example, any one of a single-click operation, a long-press operation, and a double-click operation. That is, in step S101, after the first application of the terminal receives the sharing message, the first control is displayed in the display interface.
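A hypothetical sketch of how the first application could derive the first control from a received sharing message follows. The field names (videoId, thumbnailUrl, textLink) and the returned shape are illustrative assumptions only, not the format of any real sharing message.

```javascript
// Build a description of the first control from a sharing message:
// the control occupies the area of the thumbnail if one is present,
// otherwise the area of the text link.
function buildFirstControl(sharingMessage) {
  return {
    type: sharingMessage.thumbnailUrl ? 'thumbnail' : 'textLink',
    videoId: sharingMessage.videoId, // identifies the first video
    // touch operations the control responds to (any one may trigger play)
    supportedOperations: ['singleClick', 'longPress', 'doubleClick'],
  };
}
```

Displaying this control in the first application's interface then gives the user the entry point for playing the first video through the web application.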
In step S102, a play instruction for the first video is received based on the first control, and the first video is played by using the web application.
In some embodiments, when, for example, a touch operation for the first control is received, it may be considered that a video playing instruction for the first video has been received; the web application is then run, and the first video is played by using the web application, which realizes automatic playback of the first video.
In other embodiments, when a touch operation for the first control is received, the web application is run and the thumbnail of the first video is displayed by the web application; then, when a touch operation for the thumbnail of the first video is received in the web application, it may be considered that the video playing instruction for the first video has been received, and the first video is then played by using the web application, realizing touch-triggered playback of the first video.
In some embodiments, the implementation of playing the first video by using the web application in step S102 may include: calling a preset player by using the web application; displaying a playing interface of the preset player in the display interface, where the screen ratio of the playing interface is greater than a ratio threshold; and playing the first video on the playing interface by using the preset player.
In some embodiments, the preset player may be a third-party player outside the web application; for example, the preset player may be a player with strong compatibility, high playback quality, good stability, ease of use, rich functions, low latency, scalability, high security, and low resource occupation.
Illustratively, the preset player can support a variety of audio and video formats, including but not limited to common video formats such as MP4, AVI, and MKV, and audio formats such as MP3, WAV, and G.711. The preset player supports high-resolution playback to ensure the user's viewing experience, and is unlikely to stutter, freeze, or crash during playback. The preset player's interface is intuitive and its operation is simple, so the user can use it without additional learning cost. The preset player provides functions such as fast forward, rewind, screenshot, full screen, picture-in-picture, subtitle selection, and playback speed adjustment.
In some embodiments, when the web application is running, the preset player is called and started, the first video (or a link to the first video) can be transferred to the preset player, and parameters such as the starting time and the subtitle path are then added to the control information of the preset player, thereby completing the call to the preset player.
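The parameter-passing step above can be sketched as assembling a small control-information object for the preset player. All names here (buildPlayerInvocation, startTime, subtitlePath) are hypothetical illustrations of the kind of parameters mentioned, not an actual player API.

```javascript
// Assemble the control information handed to the preset player: the
// first video (or its link) plus optional starting time and subtitle path.
function buildPlayerInvocation(videoUrl, options = {}) {
  return {
    source: videoUrl, // the first video or a link to it
    startTime: options.startTime !== undefined ? options.startTime : 0,
    subtitlePath: options.subtitlePath !== undefined ? options.subtitlePath : null,
  };
}
```

The web application would pass this object when starting the preset player, so that playback begins at the requested position with the requested subtitles.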
In some embodiments, referring to fig. 4, a play interface 1022 may be displayed in the display interface 1021. The screen ratio of the playing interface refers to the proportion of the screen occupied by the playing interface, i.e., the ratio of the first area of the playing interface to the second area of the screen. The ratio threshold is a value set in advance according to experience; in order to play the first video in the form of an immersion stream, i.e., to play the first video on a large screen or in full screen, the ratio threshold may be, for example, 90%, 95%, or 98%.
Assuming that the ratio threshold is 95%, the ratio of the first area of the playing interface to the second area of the screen is greater than 95%. On this basis, when the first video is played in the playing interface, the first video can be played in full-screen or large-screen mode, which improves the visual effect of the first video, improves the visibility of the video content, and reduces interference from interface elements.
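The screen-ratio check above amounts to a one-line comparison, sketched below with an assumed function name and a 95% default threshold taken from the example in the text.

```javascript
// The playing interface qualifies as an immersion-stream (large-screen or
// full-screen) presentation when the first area divided by the second
// (screen) area exceeds the ratio threshold.
function exceedsRatioThreshold(interfaceArea, screenArea, ratioThreshold = 0.95) {
  return interfaceArea / screenArea > ratioThreshold;
}
```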
In some embodiments, during playback of the first video by the web application, operations of pausing, continuing, and replaying the first video can also be performed. That is, in response to a pause instruction for the first video, playback of the first video is paused, and at least one of a play control and a replay control is displayed in the playing interface. After playback of the first video is paused, at least one of the following operations can also be performed: continuing to play the first video in response to a trigger operation for the play control, and replaying the first video in response to a trigger operation for the replay control.
In some embodiments, the pause instruction can be at least one of a touch instruction, a voice instruction, a gesture instruction, and a posture instruction. Taking the pause instruction as a touch instruction, for example, the pause instruction can be a click operation on the first video.
In other embodiments, a pause control may also be displayed in the playback interface, based on which the pause instruction may be a touch operation for the pause control, where the touch operation for the pause control may be a single click operation, a long press operation, or a double click operation for the pause control.
Following the above example, when a click operation on the first video is received, it is determined that a pause instruction for the first video has been received, and playback of the first video is paused.
In some embodiments, referring to fig. 5, after the first video is paused, a play control 501 and a replay control 502 may be displayed in the play interface 1022.
In some embodiments, the triggering operation for the play control may be a single click operation, a long press operation, or a double click operation for the play control, and similarly, the triggering operation for the replay control may be a single click operation, a long press operation, or a double click operation for the replay control.
For example, when the trigger operation for the play control is a single-click operation on the play control, upon receiving the single-click operation it is determined that the trigger operation for the play control has been received, and playback of the first video then continues, i.e., playback continues from the first video frame, where the first video frame refers to the video frame displayed in the playing interface when the pause instruction was received.
For example, when the trigger operation for the replay control is a single-click operation on the replay control, upon receiving the single-click operation it is determined that the trigger operation for the replay control has been received, and the first video is replayed, i.e., played again from a third video frame, where the third video frame refers to the video frame with the earliest time information in the first video and may also be regarded as the opening frame of the first video.
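The pause / continue / replay behavior described above can be modeled as a small state machine. The class and method names are hypothetical; "first video frame" below means the frame displayed when the pause instruction arrived, as in the text.

```javascript
// Pause keeps the currently displayed frame as the "first video frame";
// the play control resumes from it, while the replay control restarts
// from the earliest (opening) frame.
class PlaybackController {
  constructor() {
    this.frame = 0;          // index of the frame being displayed
    this.state = 'playing';
    this.pausedFrame = null; // the "first video frame" at pause time
  }
  pause() {
    this.state = 'paused';
    this.pausedFrame = this.frame;
  }
  continuePlaying() {        // triggered by the play control
    this.state = 'playing';
    this.frame = this.pausedFrame; // resume from the paused frame
  }
  replay() {                 // triggered by the replay control
    this.state = 'playing';
    this.frame = 0;          // the opening frame of the first video
  }
  advance(n = 1) {           // frames only advance while playing
    if (this.state === 'playing') this.frame += n;
  }
}
```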
In some embodiments, after the "pause playing the first video", the method may further include obtaining a first video frame, and displaying the first video frame on the playing interface until a trigger operation for a playing control or a trigger operation for a replay control is received, where the first video frame is a video frame displayed on the playing interface when a pause instruction is received.
In some embodiments, the display brightness of the playing interface may be reduced, after which the first video frame is displayed in the playing interface; alternatively, a pause interface may be displayed in the playing interface in the form of a floating layer or a popup window, and the first video frame is displayed in the pause interface. Illustratively, assuming that a pause instruction for the first video is received while the 50th video frame of the first video is playing, the 50th video frame is determined to be the first video frame.
In some embodiments, when the first video frame is displayed in the playing interface, if a trigger operation for the playing control is received, the display brightness of the playing interface is restored, and the playing interface is utilized to continue playing the first video, that is, the first video frame is taken as a starting frame to continue playing the first video.
In other embodiments, when the first video frame is displayed in the pause interface, if a trigger operation for the play control is received, the pause interface is canceled from being displayed, and the first video is continued to be played in the play interface.
In other embodiments, after the above "pause playing the first video", the method may further include acquiring a preset image and displaying the preset image on the playing interface until a trigger operation for the playing control or a trigger operation for the replay control is received.
In some embodiments, the implementation process of displaying the preset image on the playing interface is similar to that of displaying the first video frame on the playing interface; the difference is that one displays the preset image while the other displays the first video frame, that is, the displayed content differs. The preset image is an image set in advance; for example, it may be a cover image of the first video or a user-defined image. Accordingly, the implementation process of displaying the preset image on the playing interface may refer to that of displaying the first video frame on the playing interface.
In some embodiments, the video playing speed can be set during the process of playing the first video by the web application. That is, during the process of playing the first video by the web application, the method may further include: in response to a double-speed adjustment instruction for the first video, displaying at least two candidate double speeds; and, in response to a selection operation for the candidate double speeds, determining the selected candidate double speed as a target double speed and playing the first video at the target double speed in the playing interface.
In some embodiments, similar to the pause instruction, the double-speed adjustment instruction may be at least one of a touch instruction, a voice instruction, and a gesture instruction. Taking the case where the double-speed adjustment instruction is a touch instruction as an example, the double-speed adjustment instruction may be a two-point touch operation or a triple-click operation for the first video.
In other embodiments, a double-speed adjustment control may be displayed in the playing interface, in which case the double-speed adjustment instruction may be a touch operation for the double-speed adjustment control, where the touch operation may be a single-click operation, a long-press operation, or a double-click operation for the control.
For example, when a two-point touch operation for the first video is received, it is determined that the double-speed adjustment instruction is received, and at least two candidate double speeds, such as the four candidate speeds '0.5×', '1.0×', '1.5×', and '2.0×', are displayed on the playing interface. When a single-click operation for the candidate speed '1.5×' is received, it is determined that the selection operation for the candidate double speed is received, and '1.5×' is determined as the target double speed, i.e., 1.5× speed. Finally, video frames in the first video are switched in the playing interface at 1.5 times the preset speed, i.e., the first video is played at 1.5× speed.
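In a web page context, the double-speed playback described above maps naturally onto the HTMLMediaElement playbackRate property. The following is a minimal sketch of the selection logic under that assumption; the function names and candidate list are illustrative, not part of the described embodiment.

```javascript
// Candidate double speeds as displayed in the example ('0.5x' ... '2.0x').
const CANDIDATE_SPEEDS = [0.5, 1.0, 1.5, 2.0];

// Determine the target double speed from a selection operation: only a
// displayed candidate speed may become the target.
function selectTargetSpeed(candidates, selected) {
  if (!candidates.includes(selected)) {
    throw new Error(`speed ${selected} is not a candidate`);
  }
  return selected;
}

// Play at the target speed: in a browser, HTMLMediaElement.playbackRate
// switches video frames at targetSpeed times the preset speed.
function applyTargetSpeed(videoElement, targetSpeed) {
  videoElement.playbackRate = targetSpeed;
  return videoElement.playbackRate;
}
```

Selecting '1.5×' would then call `applyTargetSpeed(video, 1.5)`, after which frames are switched at 1.5 times the preset speed.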
In some embodiments, through the above pause, play, replay, and double-speed settings, the user can freely control the playing progress and viewing speed, meeting diversified requirements and realizing flexible playback control; the double-speed setting improves efficiency, facilitating quick acquisition of video content and providing a personalized viewing experience. With complete basic functions and intuitive operation, the user experience is improved.
With continued reference to fig. 3, the description continues from step S102 above.
In step S103, in response to the first interactive operation for the first video, a second video corresponding to the first interactive operation is determined.
In some embodiments, the first interactive operation may be one of a sliding operation, a drag operation, a voice operation, and a gesture operation. When the first interactive operation is a sliding operation, it may be an up-and-down slide or a left-and-right slide; when the first interactive operation is a voice operation, it may be a voice command such as "switch up" or "switch down"; and when the first interactive operation is a gesture operation, it may be turning the hand left or raising the hand.
In some embodiments, the first interactive operation corresponds to a second video, i.e., a different first interactive operation corresponds to a different second video.
For example, when the first interactive operation is an upward sliding operation, the previous video of the first video is determined to be the second video, and when the first interactive operation is a downward sliding operation, the next video of the first video is determined to be the second video. The previous video (or the next video) of the first video refers to the video located before (or after) the first video in the first cache list.
In some embodiments, referring to fig. 6, the implementation process of "determining the second video corresponding to the first interactive operation" in the above step S103 may include the following steps S1031 to S1033, which are described in detail below.
In step S1031, a first cache list is acquired.
In the embodiment of the application, at least a first video, a third video to be recommended, and a fourth video are stored in the first cache list. For example, the first cache list may store the first video, the third video to be recommended, and the fourth video, or it may additionally store further videos, such as a tenth video.
In some embodiments, the first cache list stores videos that have been cached locally, and stores at least three videos including the first video. In some embodiments, the number of videos stored in the first cache list may also be 4, 5, and so on. In consideration of resource occupation, the number of videos stored in the first cache list does not exceed a number threshold, which is a value set in advance according to experience and may be, for example, 8, 9, or 10.
In step S1032, a first interaction direction of the first interaction operation is determined.
In some embodiments, a start coordinate of a start action point of the first interaction and an end coordinate of an end action point of the first interaction may be obtained, and the first interaction direction may be determined based on the start coordinate and the end coordinate.
Illustratively, assuming a start coordinate of (2, 1) and an end coordinate of (2, 8), the first interaction direction is determined to be upward. And if the start coordinate is (2, 8) and the end coordinate is (2, 1), determining that the first interaction direction is downward.
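The coordinate comparison of step S1032 can be sketched as a small helper. This assumes, as in the (2, 1) to (2, 8) example above, a coordinate system whose y value grows upward; the function name is a hypothetical choice.

```javascript
// Determine the first interaction direction from the start and end
// coordinates of the first interactive operation (step S1032).
function interactionDirection(start, end) {
  const dx = end.x - start.x;
  const dy = end.y - start.y;
  // The dominant axis decides the direction of the sliding operation.
  if (Math.abs(dy) >= Math.abs(dx)) {
    return dy > 0 ? "up" : "down";
  }
  return dx > 0 ? "right" : "left";
}
```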
In step S1033, the third video is determined to be the second video when the first interaction direction is a first direction, and the fourth video is determined to be the second video when the first interaction direction is a second direction.
In some embodiments, the first direction and the second direction are preset in advance, the first direction is different from the second direction, and the first direction and the second direction may be opposite to each other.
Following the above example, assume that the first interactive operation is a sliding operation, the first interaction direction is upward, the first direction is also upward, and the order of the videos in the first cache list is the third video, the first video, and the fourth video, that is, the third video is the previous video of the first video. In this case, the third video is determined to be the second video.
Similarly, assume that the first interactive operation is a sliding operation, the first interaction direction is downward, the second direction is also downward, and the order of the videos in the first cache list is the third video, the first video, and the fourth video, that is, the fourth video is the next video of the first video. In this case, the fourth video is determined to be the second video.
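Steps S1031 to S1033 amount to picking a neighbour of the current video in the cache list. A minimal sketch, assuming the first direction is upward (previous video) and the second direction is downward (next video); the names are illustrative:

```javascript
// Pick the second video from the first cache list according to the
// first interaction direction (step S1033).
function pickSecondVideo(cacheList, currentVideo, direction) {
  const i = cacheList.indexOf(currentVideo);
  if (i === -1) throw new Error("current video is not in the cache list");
  if (direction === "up") return cacheList[i - 1];   // the third video
  if (direction === "down") return cacheList[i + 1]; // the fourth video
  return undefined;
}
```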
Through the above steps S1031 to S1033, video content is dynamically determined based on the cache list and the interaction direction, providing a smooth and efficient video switching experience for the user. At least three videos (the first video and recommended videos) are stored in the first cache list in advance, so that a video can be loaded immediately when the user switches, avoiding the loading delay caused by requesting video resources from a server in real time at each switch; this improves user experience, keeps video switching free of stuttering, and makes operation smoother. The recommended next video (the third video or the fourth video) is flexibly selected according to the first interaction direction of the user (such as sliding up or sliding down), accurately meeting the operation requirement of the user and keeping the interaction behavior consistent with the feedback of the system, thereby improving the user's sense of control over the application and avoiding situations where the switching direction is wrong or the recommended content does not meet expectations. Storing the recommended third video and fourth video in the cache list ensures that the switched content is highly relevant content recommended by the algorithm, so that the user obtains content closely matching their interests, content consumption efficiency is improved, and user loss caused by untimely content loading is reduced. Through the caching mechanism, the web application acquires related video resources in advance, reducing the real-time request pressure on the server during frequent switching, reducing network requests, saving bandwidth and server resources, improving the scalability of the system, and supporting simultaneous use by more users.
By combining a caching mechanism and an interaction direction, video switching is faster and more visual, dissatisfaction caused by delay or error switching is reduced, and the intelligent degree of web page application is improved.
In some embodiments, before step S1031, the first cache list may be constructed. On this basis, taking the case where the first video, the third video, and the fourth video are stored in the first cache list as an example, referring to fig. 7, the first cache list may be constructed through the following steps S001 to S004 before step S1031, which are described in detail below.
In step S001, a recommendation list is acquired.
In the embodiment of the application, a first video and a plurality of videos to be recommended are stored in a recommendation list.
In some embodiments, the recommendation list is a collection of a set of content to be recommended generated by the recommendation system based on the access content or content features disclosed by the user. The content to be recommended in the recommendation list is ordered according to priority, wherein the content to be recommended can be one of short videos, movies, television shows and the like.
In some embodiments, the recommendation list may be generated prior to step S001, and the method of generating the recommendation list may be one of a rule-based generation method, a collaborative filtering (Collaborative Filtering) generation method, a context recommendation-based generation method, and a deep learning-based generation method. The recommendation list can improve the content matching degree and the user viscosity, and meanwhile, the content distribution efficiency of the platform is optimized.
In some embodiments, after the recommendation list is generated, the recommendation list is deemed to be acquired.
In step S002, two candidate videos adjacent to the first video are acquired from the recommendation list.
In some embodiments, a first location of the first video in the recommendation list may be determined first, where the first location may be represented by the arrangement order number of the first video in the recommendation list; for example, if the first video is the first video in the recommendation list, the first location is 1. A second location and a third location adjacent to the first location are then acquired, where the second location stores a first candidate video and the third location stores a second candidate video, and the first candidate video is acquired from the second location and the second candidate video from the third location.
In step S003, the candidate video located before the first video is determined as the third video, and the candidate video located after the first video is determined as the fourth video.
In some embodiments, if the second location is located before the first location and the third location is located after the first location, the first candidate video is determined to be the third video and the second candidate video is determined to be the fourth video.
In step S004, a first cache list is constructed using the third video, the first video, and the fourth video.
In some embodiments, an initial list that is empty may be created first, and then the third video, the first video, and the fourth video may be added to the initial list in sequence, to obtain the first cache list.
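Steps S001 to S004 can be sketched as follows, assuming the recommendation list is an ordered array and the cache list is built as a plain array; all names are illustrative.

```javascript
// Build the first cache list (steps S001 to S004): take the two candidate
// videos adjacent to the first video in the recommendation list, then add
// them around the first video in order.
function buildFirstCacheList(recommendationList, firstVideo) {
  const pos = recommendationList.indexOf(firstVideo);
  if (pos === -1) throw new Error("first video is not in the recommendation list");
  const thirdVideo = recommendationList[pos - 1];  // candidate before the first video
  const fourthVideo = recommendationList[pos + 1]; // candidate after the first video
  const cacheList = [];                            // empty initial list
  for (const v of [thirdVideo, firstVideo, fourthVideo]) {
    if (v !== undefined) cacheList.push(v);
  }
  return cacheList;
}
```

If the first video sits at an edge of the recommendation list, only the existing neighbour is cached, which is one reasonable reading of the scheme above.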
Through the above steps S001 to S004, the loading and displaying of recommended videos are optimized through the mechanism of constructing the cache list, improving user experience and system efficiency. The candidate videos (the third video and the fourth video) adjacent to the currently played video (the first video) in the recommendation list are stored in the cache list and can be loaded and played immediately when the user switches, avoiding loading delay during video switching and providing a seamless viewing experience, which is suitable for high-frequency switching scenarios such as short videos and content stream platforms. Determining adjacent videos as recommendation candidates conforms to the logical path of the user's viewing stream (the previous video and the next video), improving the relevance and coherence of the recommended content, preventing irrelevant videos from interrupting the viewing experience, and supporting continuous content recommendation, such as serialized short videos and thematic content. The current video (the first video) and its adjacent videos (the third video and the fourth video) are stored together in the cache list, providing quick access for switching or replay, thereby reducing the number of network requests, lowering the pressure of real-time loading, improving system efficiency, and coping with poor network conditions. Since the first video and its adjacent videos are added to the first cache list in advance, the user directly uses the cached content when switching back and forth without repeated loading, saving bandwidth and server resources and improving the response speed of switching operations.
In some embodiments, if the third video is determined as the second video in the above step S1033, the following steps S1034A to S1036A may further be performed after step S1033, referring to fig. 8, as described in detail below.
In step S1034A, the fourth video is deleted from the first cache list.
In some embodiments, the fourth video may be determined from the first cache list based on a third identification of the fourth video, and a delete operation may then be triggered to delete the fourth video. For example, the built-in method remove() may be invoked to delete the fourth video.
In step S1035A, a fifth video adjacent to and before the second video is acquired from the recommendation list.
In some embodiments, the implementation process of obtaining the fifth video is similar to the implementation process of obtaining the third video in the above steps S002 and S003, and thus, the implementation process of obtaining the fifth video may refer to the implementation process of obtaining the third video in the above steps S002 and S003.
In step S1036A, the fifth video is added to the first cache list.
In an embodiment of the present application, the fifth video is located before the second video.
In some embodiments, the fifth video, the second video, and the first video are stored in the first cache list in order.
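Steps S1034A to S1036A together behave like sliding a three-element window over the recommendation list. The following is a sketch under the assumption that the fourth video sits at the tail of the cache list and the fifth video is prepended at the head; steps S1034B to S1036B would mirror it at the other end. The function name is an assumption.

```javascript
// Slide the cache window upward (steps S1034A to S1036A): delete the
// fourth video from the tail, fetch the fifth video adjacent to and
// before the second video in the recommendation list, and prepend it.
function slideWindowUp(cacheList, recommendationList, secondVideo) {
  cacheList.pop(); // step S1034A: delete the fourth video
  const pos = recommendationList.indexOf(secondVideo);
  const fifthVideo = recommendationList[pos - 1]; // step S1035A
  if (fifthVideo !== undefined) cacheList.unshift(fifthVideo); // step S1036A
  return cacheList;
}
```

Starting from a cache list of [third, first, fourth] and switching upward to the third video, the list becomes [fifth, third, first], matching the order stated above.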
Through the above steps S1034A to S1036A, the order of the first cache list can be maintained by deleting and adding videos, so that the first cache list always contains the currently played video and its two adjacent candidate videos, saving resources while ensuring real-time loading of videos and improving response speed.
In another embodiment, if the fourth video is determined as the second video in the above step S1033, the following steps S1034B to S1036B may further be performed after step S1033, referring to fig. 9, as described in detail below.
In step S1034B, the third video is deleted from the first cache list.
In step S1035B, a sixth video adjacent to and subsequent to the second video is acquired from the recommendation list.
In step S1036B, the sixth video is added to the first cache list.
In an embodiment of the present application, the sixth video is located after the second video.
In some embodiments, the implementation of the steps S1034B to S1036B is similar to the implementation of the steps S1034A to S1036A, and thus, the implementation of the steps S1034B to S1036B may refer to the implementation of the steps S1034A to S1036A.
In some embodiments, before the above step S103 of responding to the first interactive operation for the first video, the method may further include: displaying a second control in the playing interface of the preset player, where the second control is a control for displaying information to be recommended; and, in response to a triggering operation for the second control, acquiring the information to be recommended, pausing playing the first video, and displaying the information to be recommended in the display interface.
In some embodiments, the type of information to be recommended may be at least one of images, video, audio, text. The triggering operation for the second control may be a single click operation, a long press operation, or a double click operation for the second control.
For example, when a click operation for the second control is received, it is determined that the trigger operation for the second control is received, and the information to be recommended is acquired.
In some embodiments, the second control further carries an acquisition address and a storage address of the information to be recommended. The acquisition address refers to the source address used for acquiring the information to be recommended, which may be a path pointing to a remote server or a local storage location; it points to the source location of the recommended information and is used for requesting and loading the recommended data. The storage address refers to the target address for storing the information to be recommended, which may be a designated location where the recommended information is stored for subsequent use; it points to the storage location of the recommended information and is used for recording and saving it. The web application loads the recommended information by requesting it from the acquisition address and, after loading, stores it locally at the storage address for subsequent use. Through the cooperation of the acquisition address and the storage address, dynamic loading, caching, and storing of the recommended information can be realized.
In some embodiments, after receiving the trigger operation for the second control, the information to be recommended can be automatically acquired based on the acquisition address and the storage address, then the first video is paused in the playing interface, and finally the information to be recommended is displayed in the display interface.
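The cooperation of the acquisition address and the storage address can be sketched as below. The loader is injected so that any transport (such as an HTTP request) could back it; the property and function names are assumptions for illustration only.

```javascript
// Load the information to be recommended from the acquisition address
// carried by the second control and save it at the storage address.
function loadRecommendedInfo(control, loader, store) {
  // Request the recommended information from its source location.
  const info = loader(control.acquisitionAddress);
  // Record it at the target location for subsequent use.
  store.set(control.storageAddress, info);
  return info;
}
```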
In some embodiments, the implementation of "displaying information to be recommended in a display interface" may include the following two:
the first implementation mode is that information to be recommended is displayed in a playing interface.
In the second implementation mode, a recommendation interface is displayed in a floating layer or popup window mode in the display interface, and the recommendation interface can cover part or all of the playing interface and display information to be recommended in the recommendation interface.
When the information to be recommended is video, the information to be recommended needs to be played by utilizing a playing interface of a preset player, namely the first implementation mode is adopted.
In some embodiments, after "responding to the triggering operation for the second control", the method may further include: obtaining an index value of the first video and a first cache list containing the first video; and, when the display duration of the information to be recommended reaches a duration threshold or a closing instruction for the information to be recommended is received, obtaining the first video from the first cache list based on the index value and continuing to play the first video in the playing interface.
In some embodiments, when the information to be recommended is displayed in the first implementation manner, after the triggering operation for the second control is responded, an index value of the first video is also obtained, where the index value is used to characterize a position of the first video in the first cache list, and since the first cache list is already loaded when the first video is played, the first cache list is kept loaded at this time.
In some embodiments, the duration threshold is the playing duration corresponding to the information to be recommended, that is, the duration threshold corresponds to the information to be recommended. Accordingly, "when the display duration of the information to be recommended reaches the duration threshold" indicates that the playing of the information to be recommended is completed or has ended.
In some embodiments, a closing control for the information to be recommended may be further displayed on the playing interface, and when a click operation for the closing control is received, it is determined that a closing instruction for the information to be recommended is received.
In some embodiments, when the playing of the information to be recommended is finished or a closing instruction for the information to be recommended is received, a first video is obtained from the first cache list based on the index value, and the first video is continuously played in the playing interface.
In some embodiments, the implementation process of continuing to play the first video in the playing interface may include obtaining a playing time of the first video when the information to be recommended begins to be displayed, obtaining a second video frame corresponding to the playing time from the first video, and playing the first video in the playing interface with the second video frame as a start frame.
In some embodiments, when the information to be recommended starts to be displayed, the playing time of the first video is recorded, and the second video frame corresponding to the playing time is acquired from the first video. For example, the playing time may be 1 minute and 20 seconds; if the 2400th video frame is being played at that playing time, the 2400th video frame is determined as the second video frame. On this basis, the first video is played in the playing interface with the 2400th video frame as the starting frame, that is, playback proceeds from the 2400th video frame to the last video frame, thereby realizing continuous playing of the first video.
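Assuming a constant frame rate, the second video frame can be derived from the recorded playing time, which is consistent with the example above (at 30 frames per second, 1 minute 20 seconds, i.e. 80 seconds, corresponds to the 2400th frame). A one-line sketch with an assumed function name:

```javascript
// Derive the index of the second video frame from the recorded playing
// time, assuming a constant frame rate.
function resumeFrameIndex(playingTimeSeconds, framesPerSecond) {
  return Math.round(playingTimeSeconds * framesPerSecond);
}
```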
In some embodiments, after the display of the information to be recommended ends, the first video accurately returns to its original playing progress without manual operation by the user, reducing the sense of viewing interruption and making the viewing experience smoother; content playback is not fragmented by the insertion of the information to be recommended, which improves user satisfaction with the platform and reduces dissatisfaction caused by advertisement insertion. By directly resuming the first video after the recommended information finishes displaying, users are prevented from abandoning viewing due to interruption, user stickiness is improved, and the time users stay on the platform is prolonged. By retaining the first cache list, the first video does not need to be reloaded after the display of the information to be recommended ends, saving server resources and network traffic; since the video content is cached, playback can be resumed quickly without the viewing experience being affected by loading delay. By displaying the information to be recommended during video playing while preserving the information of the first video being watched, the information to be recommended and the content playback can be connected seamlessly.
In other embodiments, before the above step S103 of responding to the first interactive operation for the first video, the method may further include: obtaining a reference display direction corresponding to the first video and a first display direction of the playing interface of the preset player; when the reference display direction is different from the first display direction, generating a rotation prompt message, where the rotation prompt message is used to prompt the direction of rotation for display; and displaying the rotation prompt message in the playing interface.
In some embodiments, metadata of the first video may be obtained and parsed to obtain parsed metadata, and then a reference display direction may be obtained from the parsed metadata, where the reference display direction refers to an optimal presentation direction of video content of the first video relative to the playback interface in the first video presentation. The metadata of the first video is additional information describing the content of the first video, and covers basic information, technical attributes, identification information and the like of the first video. The metadata provides important support for video presentation, management, recommendation and retrieval, and is of great importance in scenes such as short video platforms, streaming media platforms and the like. The reference display direction of the first video may be used as a part of metadata to provide guidance for the preset player and system, ensuring that the first video is displayed with an optimal visual effect. The method not only optimizes the user experience, but also provides basic support for recommending, classifying and adapting different devices of the video.
In some embodiments, the reference display direction is used to optimize the visual effect and content adaptation of the video in different interactive scenes, and is generally divided into a landscape display direction (Landscape) and a portrait display direction (Portrait). For example, the reference display direction may be landscape or portrait.
For example, assuming that the reference display direction is vertical, when the first display direction is horizontal, the reference display direction is different from the first display direction, so a rotation prompt message of a text or voice type is generated, such as the text "please turn the playing device to vertical", and finally the rotation prompt message is displayed in the playing interface.
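The direction comparison and prompt generation can be sketched as follows; the message wording follows the example in the text, and the function name is hypothetical.

```javascript
// Compare the reference display direction from the video metadata with
// the current display direction of the playing interface and, if they
// differ, produce a rotation prompt message.
function rotationPrompt(referenceDirection, currentDirection) {
  if (referenceDirection === currentDirection) return null; // no prompt needed
  return `please turn the playing device to ${referenceDirection}`;
}
```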
In some embodiments, the implementation process of displaying the rotation prompt information in the playing interface may be that a prompt interface is displayed in the playing interface, a first area of the prompt interface is smaller than a second area of the playing interface, and the rotation prompt information is displayed in the prompt interface.
In some embodiments, the prompt interface may be displayed in the form of a floating layer or a popup window in the playing interface, where the first area of the prompt interface is smaller than the second area of the playing interface, and the shape of the prompt interface may be quadrilateral, circular, elliptical, hexagonal, or the like, and the embodiment is not limited to this shape.
In some embodiments, the implementation process of displaying the prompt interface in the playing interface may be that the prompt interface is displayed in the playing interface based on the preset position information, or the background image of each video frame in the first video is determined, the prompt interface is displayed in a background area of the playing interface, and the background area is an area for displaying the background image.
In some embodiments, the preset position information may include two diagonal coordinate values, through which a quadrilateral prompting interface may be determined, and may also include a circle center coordinate value and a radius, through which a circular prompting interface may be determined. For example, in order not to affect the first video playing effect, the area determined based on the two diagonal coordinate values may be a lower left area of the playing interface, that is, the area where the prompt interface is located at the lower left area of the playing interface. Similarly, the alert interface may also be located in the upper left, lower right, or upper right region of the play interface.
In some embodiments, the prompt interface may be displayed in a floating layer in the playing interface, and transparency of the prompt interface may also be set, so that the rotation prompt information may be synchronously displayed without affecting the playing of the first video, thereby optimizing the display effect and improving the viewing experience.
In some embodiments, the background image of each video frame in the first video may be determined by any one of a frame difference method, a mean method, a median method, background modeling, image segmentation, and corner detection, and the method for determining the background image is not limited in this embodiment. Based on this, when a video frame is displayed in the playback interface, the area in which the background image is displayed is referred to as a background area, and then the presentation interface is displayed in the form of a floating layer in the background area. Therefore, if the background area changes along with the switching of the video frames, the position of the prompt interface in the playing interface also changes dynamically, so that the foreground part of the video frames can be ensured not to be blocked while the first video is synchronously played and the rotation prompt information is displayed, and the display effect is improved.
With continued reference to fig. 3, the description continues from step S103 above.
In step S104, the second video is played using the web application.
In some embodiments, the implementation process of the above step S104 may be as follows: a reference volume value of the environment where the terminal is located is determined, the terminal being the terminal running the web application; a play volume value is determined based on the reference volume value; and the second video is played at the play volume value by using the web application.
In some embodiments, the reference volume value of the environment can be acquired by the audio acquisition component of the terminal, for example, 50 dB, and the play volume value may be determined based on the reference volume value in the following two implementation manners:
In a first implementation manner, a reference mapping table is obtained, where the reference mapping table stores reference volume values, play volume values, and the correspondences between them, and the corresponding play volume value is determined from the reference mapping table based on the reference volume value. For example, 70 dB may be determined as the play volume value based on 50 dB and the reference mapping table.
In a second implementation manner, a volume difference is obtained, where the volume difference is a value preset in advance according to experience, for example 10 dB, 15 dB, or 20 dB, and the sum of the reference volume value and the volume difference is then determined as the play volume value. For example, assuming that the reference volume value is 50 dB and the volume difference is 15 dB, 65 dB is determined as the play volume value.
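The two implementation manners above can be sketched as follows; the table entries and the 15 dB default difference are illustrative assumptions, not values fixed by the application:

```typescript
// Hedged sketch of the two play-volume strategies. The mapping-table
// contents below are example entries only.
const referenceMap = new Map<number, number>([[50, 70], [60, 75]]);

// First implementation: look the play volume value up in a reference mapping table.
function volumeFromTable(referenceDb: number): number | undefined {
  return referenceMap.get(referenceDb);
}

// Second implementation: add an empirically preset volume difference.
function volumeFromDifference(referenceDb: number, differenceDb = 15): number {
  return referenceDb + differenceDb;
}
```

For a reference value of 50 dB, the first strategy yields 70 dB from the table and the second yields 65 dB with the default difference.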
In some embodiments, if the first implementation manner is adopted, the second video is played at 70 dB using the web application, where the implementation of playing the second video may refer to the implementation of playing the first video using the web application. In addition, during playing of the second video, if a second interactive operation for the second video is received, video switching is performed based on the second interactive operation, similarly to the above steps S102 to S104.
Through the above steps S101 to S104, a first control is displayed in a display interface of a first application, where the first control is a control for playing a first video through a web application, the first video is from a second application, the first application and the second application are different from the web application, and the first application is also different from the second application. When a playing instruction for the first video is received based on the first control, the first video is played through the web application, and in the process of playing the first video, a second video corresponding to a first interactive operation for the first video can be determined in response to that operation, and the second video is played through the web application. That is, video playback switching is realized through the first interactive operation, which improves the convenience of video switching, improves the efficiency of playing videos of other, non-web applications through the web application, enriches the video switching function, and also improves interactivity during video playback.
In some embodiments, if the above step S1033 is to determine the third video as the second video, the following steps S105 to S111, which will be specifically described below, may also be performed after the above step S104, see fig. 10.
In step S105, in response to the second interactive operation for the second video, a time interval between the second interactive operation and the first interactive operation is determined.
In some embodiments, a first time at which the first interactive operation is triggered may be acquired, a second time at which the second interactive operation is triggered may be acquired, and the difference between the second time and the first time may be determined as the time interval.
For example, assuming the first time at which the first interactive operation is triggered is 14:46:20 and the second time at which the second interactive operation is triggered is 14:46:30, 10 seconds is determined as the time interval.
In step S106, it is determined whether the time interval is greater than or equal to an interval threshold.
In some embodiments, the interval threshold is a value set in advance according to experience; illustratively, the interval threshold may be 5 seconds, 10 seconds, 15 seconds, etc. If the time interval is greater than or equal to the interval threshold, the time interval between the two interactive operations is normal, indicating that the second video is played normally, and step S107 is entered; if the time interval is less than the interval threshold, the time interval between the two interactive operations is abnormal, indicating that playback of the second video was interrupted, and step S109 is entered.
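The interval judgment of steps S105 and S106 can be sketched as a small helper; the 10-second default threshold is only one of the example values above:

```typescript
// Minimal sketch of steps S105-S106: compute the interval between the two
// interaction timestamps (in milliseconds) and decide which branch follows.
function chooseBranch(
  firstMs: number,
  secondMs: number,
  thresholdMs = 10_000,
): "S107" | "S109" {
  const interval = secondMs - firstMs;
  // Interval at or above the threshold: normal playback, continue to S107;
  // below the threshold: playback was interrupted, switch cache lists in S109.
  return interval >= thresholdMs ? "S107" : "S109";
}
```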
In step S107, the first video or the fifth video in the first cache list is determined to be the seventh video corresponding to the second interactive operation based on the second interactive direction of the second interactive operation.
In the embodiment of the application, the first video, the second video and the fifth video are stored in the first cache list.
Here, the first cache list is considered to conform to the current usage scenario, so the second interactive operation is responded to based on the first cache list, and the seventh video is determined from the first video and the fifth video of the first cache list based on the second interaction direction of the second interactive operation. The first cache list stores the fifth video, the second video, and the first video in sequence.
In some embodiments, the implementation of the step S107 is similar to the implementation of the step S1033, and thus, the implementation of the step S107 may refer to the implementation of the step S1033.
In step S108, the seventh video is played using the web application, and the flow ends.
In some embodiments, the implementation of the step S108 is similar to the implementation of the step S104, and thus, the implementation of the step S108 may refer to the implementation of the step S104.
In step S109, a second cache list is acquired.
In the embodiment of the present application, the first video, the eighth video, and the ninth video are stored in the second cache list.
Here, the first cache list is considered not to conform to the current usage scenario, and the first cache list is replaced with the second cache list.
In some embodiments, the video type in the second cache list is different from that in the first cache list; illustratively, the videos in the second cache list may be videos belonging to the same topic as the first video, and the videos in the first cache list may be videos with high popularity values associated with the first video.
In step S110, the eighth video or the ninth video is determined as a seventh video based on the second interaction direction.
In some embodiments, the implementation of the step S110 is similar to the implementation of the step S1033, and thus, the implementation of the step S110 may refer to the implementation of the step S1033.
In step S111, a seventh video is played using the web application.
In some embodiments, the implementation of the step S111 is similar to the implementation of the step S104, and thus, the implementation of the step S111 may refer to the implementation of the step S104.
Through the above steps S105 to S111, when an interruption of video playback is detected, the cache list is replaced in time, ensuring that the videos to be recommended conform to the usage scenario and improving the video playing effect.
In the following, an exemplary application of the embodiment of the present application in a practical application scenario will be described.
In the embodiment of the application, the player and the screen-sliding component are deployed on the Web side, and video playback is optimized on the Web side, so that an immersive video experience identical to that of a native application deployed on the terminal operating system is realized on the Web side; this immersive video experience can be applied across terminals to any device terminal supporting a browser. Through this consistent immersive video experience, after a user lands on content from an H5 web page video shared through the APP, a content experience identical to that in the APP can be achieved, realizing product consistency. The Web side corresponds to the web application in other embodiments, and the player corresponds to the preset player in other embodiments.
Compared with a conventional Feed stream, the immersive stream presents content in a list form, providing a more immersive reading experience. The immersive stream highlights the details and characteristics of rich media content, enhances user participation and interactivity, and improves the user's sense of engagement and social experience. At the same time, the immersive-stream style typically presents content in a large-image or full-screen format, providing greater presentation space and better visual effect for recommended information (e.g., advertisements), resulting in higher user engagement and commercial revenue. Through gray-release testing of the immersion scheme, forward benefits are brought to the exposure and click indicators of the business page, effectively boosting business revenue.
The solution is implemented on the Web, and the cross-terminal nature of Web components allows it to be applied to each device terminal, realizing a cross-terminal function. Anti-shake design and detection are performed using native user-gesture calculation processing, a virtual list is constructed, and memory is reclaimed in time, achieving high performance. Seamless sliding of content is realized through content preloading, enhancing the user experience.
Fig. 11A is a first schematic diagram of video playing at the web page end according to an embodiment of the present application. The embodiment of the present application can achieve the effect of sliding down one screen: when the screen displays the display interface 1001 shown in fig. 11A, in response to a sliding operation in the downward direction, switching of video playback can be achieved, and the display interface 1001 is switched to the display interface 1002, as shown in fig. 11B; fig. 11B is a second schematic diagram of video playing at the web page end according to an embodiment of the present application.
In some embodiments, referring to FIG. 12, FIG. 12 is a schematic structural diagram of a video playback framework provided by an embodiment of the present application. The video playback framework 1100 comprises an immersive player card 1101, a sliding component 1102, and a recommendation list 1103, where the immersive player card 1101 comprises a play control management component 1011, a player 1012, and a child component 1013, and the sliding component 1102 comprises a sliding animation 1021 and a virtual list 1022. In addition, the player 1012 has functions such as video preloading, event triggering, progress-bar display, and double-speed play.
In some embodiments, referring to FIG. 12, when a sliding operation is detected by the sliding component 1102, on the one hand the immersive player card 1101 is invoked, and on the other hand a change event (onChange) is triggered; a data preload is performed through the change event to retrieve a message-stream list (feedList) from the recommendation list, and the videos in the message-stream list are stored or transmitted in the form of an immersive stream via the player 1012, after which the videos can be played, paused, or replayed through events invoking the player. With continued reference to FIG. 12, the change event and the recommendation list are implemented through business testing, and the immersive player card 1101 and the sliding component 1102 may be implemented through component packaging.
In some embodiments, an immersive stream is used for the player 1012 and the sliding component 1102, where the player 1012 plays with a third-party player having characteristics such as smooth playback, efficient decoding, low resource occupancy, a friendly interface, and quick response.
In some embodiments, based on FIG. 12 described above, an embodiment of the present application provides a schematic diagram of the call relationships between components. As shown in FIG. 13, page 1201 calls the immersive player card 1101 through an import. When a sliding operation is detected, the sliding component 1102 invokes the immersive player card 1101, the immersive player card 1101 renders the sliding component 1102 through a render-slide animation (renderSlide), and then invokes the video sliding component 1202 and starts the player 1012. The immersive player card 1101, the video sliding component 1202, and the player 1012 all belong to the components.
First, the sliding animation flow.
The sliding effect includes: in response to pulling up past the screen midline, or sliding up at a certain speed and releasing, switching to the next video is triggered; and in response to pulling down past the screen midline, or sliding down at a certain speed and releasing, switching to the previous video is triggered.
Fig. 14 is a flowchart for implementing the sliding effect according to the embodiment of the present application, referring to fig. 14, the implementing process of the sliding effect may include the following steps:
step S1301, the initial coordinates and the start time are acquired.
In the embodiment of the application, touch-screen (touch) events can be detected in real time; when the touch starts (touchStart), that moment is determined as the start time, and the initial coordinates at which the touch acts on the display screen are acquired.
In step S1302, the current coordinates are acquired.
In some embodiments, during the touch-move (touchMove) phase, the current coordinates may be acquired periodically, where the current coordinates refer to the position of the touch point at the time of acquisition.
In step S1303, the offset amount and the offset angle are determined.
In some embodiments, on the one hand, the distance between the initial coordinates and the current coordinates may be determined and taken as the offset; on the other hand, a movement segment may be determined based on the initial coordinates and the current coordinates, and the included angle between the movement segment and a reference line is determined as the offset angle, where the reference line is a line parallel to the long side of the screen or a line parallel to the short side of the screen.
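Step S1303 can be sketched as follows, assuming the reference line is the one parallel to the long (vertical) side of a portrait screen; the function name is illustrative:

```typescript
// Sketch of step S1303: offset is the distance between the initial and
// current coordinates; the offset angle is measured between the movement
// segment and a vertical reference line (an assumption for a portrait screen).
function offsetAndAngle(
  start: [number, number],
  current: [number, number],
): { offset: number; angleDeg: number } {
  const dx = current[0] - start[0];
  const dy = current[1] - start[1];
  const offset = Math.hypot(dx, dy);
  // atan2(|dx|, |dy|) gives the angle away from vertical, in degrees.
  const angleDeg = offset === 0 ? 0 : (Math.atan2(Math.abs(dx), Math.abs(dy)) * 180) / Math.PI;
  return { offset, angleDeg };
}
```

A purely vertical drag yields an offset angle of 0 degrees, so it passes any small angle threshold; a purely horizontal drag yields 90 degrees and is treated as a false operation.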
In step S1304, it is determined whether the offset angle is smaller than an angle threshold.
In some embodiments, the angle threshold may be a value set in advance; illustratively, the angle threshold may be 3 degrees, 5 degrees, 10 degrees, etc. If the offset angle is greater than or equal to the angle threshold, indicating that the received touch operation is a false operation, step S1305 is entered, i.e., the page content is kept unchanged; if the offset angle is less than the angle threshold, indicating that the received touch operation is a sliding operation, step S1306 is entered.
In step S1305, the page content is kept unchanged, and the flow ends.
In the embodiment of the application, the method includes that the video played in the page is unchanged, namely, the video being played in the page is continuously played.
In step S1306, the control page moves following the slide operation.
In the embodiment of the application, the video in the page can be controlled to move along with the movement of the sliding operation.
In step S1307, it is determined whether the offset is greater than 0.
In some embodiments, if the offset is greater than 0, indicating that the sliding operation is an upward slide, the page content is moved upward, i.e., step S1308 is entered; if the offset is less than 0, indicating that the sliding operation is a downward slide, the page content is moved downward, i.e., step S1309 is entered.
In step S1308, the page content is moved upward. And proceeds to step S1310.
In the embodiment of the application, the video can be played in an upward moving way.
In step S1309, the page content is moved downward.
In the embodiment of the application, the video can be played in a downward moving way.
In step S1310, the end coordinates and the end time are acquired.
At this time, the touch end (touchEnd) stage is entered, and in some embodiments, the implementation of step S1310 is similar to the implementation of step S1301, so the implementation of step S1310 may refer to the implementation of step S1301.
In step S1311, the duration of the operation is determined.
In some embodiments, the difference between the end time and the start time is determined as the duration.
In step S1312, it is determined whether the duration is less than the duration threshold.
In the embodiment of the present application, the time length threshold is set in advance according to experience, and exemplary time length thresholds may be 200 ms, 300 ms, and the like.
In some embodiments, if the duration is less than the duration threshold, it is determined that a sliding operation has been received and the page-turning operation is performed, i.e., step S1313 is entered; if the duration is greater than or equal to the duration threshold, the operation may be a false operation, and step S1314 is entered for further judgment.
In step S1313, a page-turning operation is performed, and the flow ends.
In some embodiments, the page flip operation refers to video switching. For example switching the video to the previous video or to the next video.
In step S1314, a second offset is determined.
In some embodiments, the distance between the initial and end coordinates is determined as the second offset.
In step S1315, it is determined whether the second offset amount is smaller than a preset distance.
In some embodiments, the preset distance is a value set in advance based on experience; illustratively, the preset distance may be 1/3 of the screen length, 1/2 of the screen length, or the like. If the second offset is less than the preset distance, indicating that a false operation was received, step S1316 is entered; if the second offset is greater than or equal to the preset distance, the operation is confirmed to be a sliding operation and step S1313 is entered.
In step S1316, the current page content is held.
At this time, the currently played video is kept, and the video switching is not performed.
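The touch-end judgment of steps S1311 to S1316 can be sketched as a pure decision function; the 300 ms threshold and the 1/3-screen preset distance are examples taken from the text, not fixed values:

```typescript
// Hedged sketch of the touch-end flow: a quick flick (short duration)
// always turns the page; a slower drag turns the page only if it covered
// at least the preset distance, otherwise the current page is kept.
function onTouchEnd(
  durationMs: number,
  secondOffsetPx: number,
  screenLengthPx: number,
  durationThresholdMs = 300,
): "turnPage" | "keepPage" {
  if (durationMs < durationThresholdMs) return "turnPage"; // S1312 -> S1313
  const presetDistance = screenLengthPx / 3;               // S1315 example value
  return secondOffsetPx >= presetDistance ? "turnPage" : "keepPage"; // S1313 / S1316
}
```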
Second, the sliding virtual-list processing flow.
When the user continuously swipes through videos, continuously appending Document Object Model (DOM) nodes to the page will cause page performance problems, so a virtual list needs to be constructed. Referring to fig. 15, the functions of the virtual list may include: when the view slides to position 2, the video at position 0 is recycled and the video at position 4 is created, ensuring that the page always maintains a specified number of video players; at the same time, in cooperation with preloading of the recommendation list, the data for the next swipe is loaded. The virtual list corresponds to the first cache list in other embodiments.
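The windowing behavior of fig. 15 can be sketched as computing the set of positions whose players stay alive; the one-behind/two-ahead window is an assumption chosen to match the example in which position 0 is recycled and position 4 is created when sliding to position 2:

```typescript
// Illustrative virtual-list window: keep a fixed-size window of player
// indices around the current position; any index outside the window can
// have its DOM node recycled, keeping the player count bounded.
function virtualWindow(current: number, total: number, behind = 1, ahead = 2): number[] {
  const kept: number[] = [];
  const lo = Math.max(0, current - behind);
  const hi = Math.min(total - 1, current + ahead);
  for (let i = lo; i <= hi; i++) kept.push(i);
  return kept;
}
```

Sliding from position 1 to position 2 in a 10-item list changes the window from [0..3] to [1..4]: position 0 leaves the window (recycled) and position 4 enters it (created).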
Third, automatic video playing.
Although the player's <video/> tag itself supports an automatic play (autoplay) attribute, in order to protect the user experience, browser vendors prevent unexpected media playback while the user browses a web page by limiting automatic play: the browser does not allow audio media files to play automatically before any interactive operation has occurred. This limitation is still acceptable for the original single-video playback scenario, but for immersive video of this kind it is very disruptive to the user experience. The embodiment of the application can realize automatic playing through a jsapi callback.
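The text realizes automatic playing through a jsapi callback; as a general, hypothetical illustration of coping with browser autoplay policies (not the application's actual mechanism), a common muted-fallback pattern looks like this, where the PlayableVideo interface is an assumption standing in for an HTMLVideoElement:

```typescript
// Hypothetical sketch: try unmuted autoplay; if the browser's autoplay
// policy rejects it, fall back to muted playback; if even that fails,
// wait for a user gesture (e.g. a jsapi callback) before playing.
interface PlayableVideo {
  muted: boolean;
  play(): Promise<void>;
}

async function tryAutoplay(video: PlayableVideo): Promise<"unmuted" | "muted" | "blocked"> {
  try {
    await video.play();
    return "unmuted";
  } catch {
    video.muted = true; // autoplay policies generally allow muted playback
    try {
      await video.play();
      return "muted";
    } catch {
      return "blocked";
    }
  }
}
```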
Fourth, video preloading processing.
In the embodiment of the application, after the first video is played, the next video content and the previous video content are preloaded through a task list and written into the cache, and playback is invoked directly from the cache when switching.
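The preloading idea can be sketched as follows; `fetchVideo` and the URL-keyed cache are stand-in assumptions, not the application's actual task list:

```typescript
// Hedged sketch: queue tasks that fetch the previous and next videos into a
// cache keyed by URL, so that switching reads from the cache directly.
type FetchVideo = (url: string) => Promise<string>;

async function preloadNeighbors(
  cache: Map<string, string>,
  playlist: string[],
  currentIndex: number,
  fetchVideo: FetchVideo,
): Promise<void> {
  const tasks = [currentIndex - 1, currentIndex + 1]
    // Skip out-of-range neighbors and entries already cached.
    .filter((i) => i >= 0 && i < playlist.length && !cache.has(playlist[i]))
    .map(async (i) => cache.set(playlist[i], await fetchVideo(playlist[i])));
  await Promise.all(tasks);
}
```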
In some embodiments, fig. 16 is a schematic flowchart of a video preloading method provided in an embodiment of the present application, where the video preloading method may be performed by a computer device; in the embodiment of the present application, the terminal is taken as an example for illustration. Referring to fig. 16, the video preloading method includes the following steps:
in step S1501, a play request is received.
In step S1502, the play request is converted into a proxy link (proxyUrl).
In the embodiment of the application, the forwarding of the playing request can be realized through proxy link.
In step S1503, the proxy link is parsed.
In some embodiments, by examining and analyzing the proxy link, the resource to which the proxy link points is determined.
In step S1504, it is determined whether a file of a preset format is included.
In the embodiment of the present application, the preset format may refer to m3u8 type, and based on this, in this embodiment, it is determined whether the resource pointed by the proxy link includes a file of m3u8 type.
In some embodiments, if a file of a preset format is included, step S1505 is entered, otherwise step S1509 is entered.
In step S1505, it is determined whether the file in the preset format includes a recommendation list.
In some embodiments, whether the file in the preset format includes the recommendation list may be determined by parsing the file in the preset format. If the recommendation list is not included, the file in the preset format is abnormal or wrong, for example, part of the file is missing, and step S1506 is entered; if the recommendation list is included, the file in the preset format is correct and complete, and step S1508 is entered.
In step S1506, a file of a preset format is downloaded.
In some embodiments, the file in the pre-set format may be re-downloaded based on the address in the proxy link.
In step S1507, the recommendation list is cached.
In some embodiments, if the file in the preset format includes the play address of the recommendation list, the recommendation list may be cached based on the play address of the recommendation list.
In step S1508, the links in the recommendation list are parsed and replaced in their entirety.
In some embodiments, the links in the recommendation list are parsed to obtain parsed links, and the original links are updated or replaced with them, where the parsed links point to unencrypted video.
In step S1509, it is determined whether or not a cache list exists.
In some embodiments, step S1510 is entered if no cache list exists, otherwise step S1512 is entered. The cache list corresponds to the first cache list in other embodiments.
In step S1510, a download task is created.
At this point, since no cache list exists, a task for downloading the cache list is created.
In step S1511, the cache list is written.
In some embodiments, the video is added to the cache list.
In step S1512, the data is returned.
In some embodiments, the video may be retrieved from a cache list.
In step S1513, the flow ends.
Through the above steps S1501 to S1513, the creation of the cache list is realized, and the preloading of the video is realized based on the cache list.
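Step S1508's full parsing and replacement of links can be sketched as a pure text transform over an m3u8-style file; the `resolve` mapping that produces links to unencrypted video is an assumption:

```typescript
// Hedged sketch of step S1508: walk an m3u8-style playlist line by line and
// replace every media URI line in full with a resolved link, leaving the
// "#"-prefixed directive lines and blank lines untouched.
function replacePlaylistLinks(m3u8Text: string, resolve: (uri: string) => string): string {
  return m3u8Text
    .split("\n")
    .map((line) => (line.startsWith("#") || line.trim() === "" ? line : resolve(line.trim())))
    .join("\n");
}
```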
Fifth, positioning on return across terminal pages.
The videos in the video list may carry a clickable jump slot, through which the page can jump, for example an advertisement jumping to its advertisement landing page. The behavior is inconsistent across related operating systems: in related operating system 1, because the Back-Forward Cache (bfcache) is hit, returning to the page can still be positioned at the video before the jump, whereas in related operating system 2, returning to the page directly refreshes it and returns to the first video again.
In an embodiment of the application, the values of dataList and currentIndex are stored before the page jumps, and the page is repositioned to currentIndex on return, thereby solving the repositioning problem.
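The store-and-restore logic can be sketched as follows; a Storage-like interface is injected so the sketch stays independent of the browser's sessionStorage, and the key name is an assumption:

```typescript
// Hedged sketch: persist the data list and current index before the jump,
// and restore them on return so the page repositions to the same video.
interface KVStore {
  getItem(k: string): string | null;
  setItem(k: string, v: string): void;
}

function saveBeforeJump(store: KVStore, dataList: string[], currentIndex: number): void {
  store.setItem("immersiveState", JSON.stringify({ dataList, currentIndex }));
}

function restoreOnReturn(store: KVStore): { dataList: string[]; currentIndex: number } | null {
  const raw = store.getItem("immersiveState");
  return raw ? JSON.parse(raw) : null;
}
```

In a browser, `window.sessionStorage` already satisfies the KVStore shape, so the same functions work unchanged there.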
In some embodiments, fig. 17 is a schematic diagram of a frame structure of video positioning provided in the embodiments of the present application, referring to fig. 17, assuming that five videos, namely, video 6011, video 6012, video 6013, video 6014, and video 6015 are included in a cache list 1601, and assuming that a touch operation for an advertisement button 6021 is received during playing of the video 6012 by using a video immersion stream 1602, a page jump instruction is considered to be received, the cache list 0221 and a video index value 0222 are stored in a session storage (sessionStorage) object 6022, then, a jump is made to an advertisement bottom page 1603, and finally, when the advertisement playing is completed, the video immersion stream 1602 is returned to continue playing the video 6012 based on the session storage object 6022.
Sixth, landscape-screen adaptation.
When the terminal has not enabled portrait-orientation lock, the entire browser page may rotate to landscape, which can disorder the page. Therefore, the embodiment of the application adds landscape detection; when the terminal screen is detected to be in landscape, a prompt can be given through a popup window (Toast), for example, the popup window outputs "Please rotate the phone screen; the portrait experience is better".
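The landscape check behind the Toast can be sketched as a viewport predicate; in a browser one might evaluate it on orientationchange or resize events, though that wiring is an assumption:

```typescript
// Illustrative landscape detection: the viewport sizes are passed in so the
// predicate stays testable outside a browser (e.g. window.innerWidth/Height).
function shouldPromptPortrait(viewportWidth: number, viewportHeight: number): boolean {
  // Wider than tall means landscape, so prompt the user to rotate back.
  return viewportWidth > viewportHeight;
}
```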
Through testing, first, the daily average exposure click-through rate of the video pull-up slot increased by 56.44 percent, and the daily average exposure click-through rate increased by 53.81 percent, which helps increase revenue; second, in the testing stage, the average dwell time on the landing page increased by 16.42 percent, the average dwell time on the secondary page increased by 21.88 percent, total playback increased by 30.86 million, and commercial revenue increased by 1.53 percent.
It can be appreciated that the embodiment of the present application involves related data such as the first video, the preset image, the first cache list, the recommendation list, the second cache list, the index value, the playing time, the reference display direction, and the first display direction, and the collection, use, and processing of such related data need to comply with relevant laws, regulations, and standards.
Continuing with the description below of an exemplary structure of the video playback device 455 provided by embodiments of the present application implemented as software modules, in some embodiments, as shown in fig. 2, the software modules stored in the video processing device 455 of the memory 450 may include:
a first display module 4551 configured to display a first control in a display interface of a first application, where the first control is a control for playing a first video through a web application, the first video is from a second application, and the first application and the second application are different from the web application; a first playing module 4552 configured to receive a playing instruction for the first video based on the first control and play the first video through the web application; a first determining module 4553 configured to determine, in response to a first interactive operation for the first video, a second video corresponding to the first interactive operation; and a second playing module 4554 configured to play the second video through the web application.
In some embodiments, the first playing module 4552 is further configured to invoke a preset player with the web application, display a playing interface of the preset player on the display interface, where a screen ratio of the playing interface is greater than a ratio threshold, and play the first video on the playing interface with the preset player.
In some embodiments, the software modules stored in the video processing device 455 of the memory 450 further include a first response module for pausing playing the first video in response to a pause instruction for the first video, and a second display module for displaying at least one of a play control and a replay control in the play interface.
In some embodiments, the software modules stored in the video processing device 455 of the memory 450 further include at least one of a second response module for continuing to play the first video in response to a trigger operation for the play control, and a third response module for replaying the first video in response to a trigger operation for the replay control.
In some embodiments, the software modules stored in the video processing device 455 of the memory 450 further include a third display module configured to obtain a first video frame and display the first video frame on the playing interface until a trigger operation for the playing control or a trigger operation for the replay control is received, where the first video frame is a video frame displayed on the playing interface when the pause instruction is received, or a fourth display module configured to obtain a preset image and display the preset image on the playing interface until a trigger operation for the playing control or a trigger operation for the replay control is received.
In some embodiments, the software modules stored in the video processing device 455 of the memory 450 further include a fourth response module for displaying at least two candidate multi-speeds in response to a multi-speed adjustment instruction for the first video, a fifth response module for determining the selected candidate multi-speed as a target multi-speed in response to a selection operation for the candidate multi-speed, and a third play module for playing the first video at the target multi-speed in the play interface.
In some embodiments, the first determining module 4553 is further configured to obtain a first cache list, where at least the first video, a third video to be recommended, and a fourth video are stored in the first cache list, determine a first interaction direction of the first interaction operation, determine the third video as the second video when the first interaction direction is a first direction, and determine the fourth video as the second video when the first interaction direction is a second direction.
In some embodiments, when the first video, the third video, and the fourth video are stored in the first cache list, the software module stored in the video processing device 455 of the memory 450 further includes a first obtaining module configured to obtain a recommendation list in which the first video and a plurality of videos to be recommended are stored, a second obtaining module configured to obtain two candidate videos adjacent to the first video from the recommendation list, a second determining module configured to determine a candidate video located before the first video as the third video and a candidate video located after the first video as the fourth video, and a first constructing module configured to construct the first cache list using the third video, the first video, and the fourth video.
In some embodiments, the software modules stored in the video processing device 455 of the memory 450 further include a first deletion module for deleting the fourth video from the first cache list, a third acquisition module for acquiring a fifth video adjacent to and before the second video from the recommendation list, and a first addition module for adding the fifth video to the first cache list, the fifth video being before the second video.
In some embodiments, the software modules stored in the video processing device 455 of the memory 450 further include a second deletion module for deleting the third video from the first cache list, a fourth acquisition module for acquiring a sixth video adjacent to and subsequent to the second video from the recommendation list, and a second addition module for adding the sixth video to the first cache list, the sixth video being located subsequent to the second video in the first cache list.
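The deletion and addition modules above together slide the three-element cache window one step in the direction of the interaction. A minimal sketch, assuming the first interaction direction corresponds to `"backward"` (the third video becomes the second video) and the second direction to `"forward"` (the fourth video becomes the second video); the names and the direction strings are hypothetical:

```python
def slide_cache_list(cache_list, recommendation_list, direction):
    """Slide the three-element first cache list one step.

    "forward":  the fourth video becomes the second video; the third
                video is deleted and a sixth video, adjacent to and after
                the second video in the recommendation list, is appended.
    "backward": the third video becomes the second video; the fourth
                video is deleted and a fifth video, adjacent to and before
                the second video, is inserted at the front.
    """
    third, first, fourth = cache_list
    if direction == "forward":
        second = fourth
        i = recommendation_list.index(second)
        sixth = (recommendation_list[i + 1]
                 if i + 1 < len(recommendation_list) else None)
        return [first, second, sixth]
    else:
        second = third
        i = recommendation_list.index(second)
        fifth = recommendation_list[i - 1] if i > 0 else None
        return [fifth, second, first]
```

Starting from the cache list `["v2", "v3", "v4"]`, a forward slide yields `["v3", "v4", "v5"]` and a backward slide yields `["v1", "v2", "v3"]`, so the currently played video always sits in the middle of the window.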
In some embodiments, the software modules stored in the video processing device 455 of the memory 450 further include a sixth response module for determining, in response to a second interaction operation for the second video, a time interval between the second interaction operation and the first interaction operation, a third determination module for determining, when the time interval is greater than or equal to an interval threshold, a seventh video corresponding to the second interaction operation based on a second interaction direction of the second interaction operation, the seventh video being the first video or the fifth video in the first cache list in which the first video, the second video, and the fifth video are stored, and a fourth play module for playing the seventh video using the web application.
In some embodiments, the software modules stored in the video processing device 455 of the memory 450 further include a fifth obtaining module configured to obtain, when the time interval is less than the interval threshold, a second cache list in which the first video, an eighth video, and a ninth video are stored, a fourth determining module configured to determine the eighth video or the ninth video as the seventh video based on the second interaction direction, and a fifth playing module configured to play the seventh video using the web application.
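The interval-threshold branching in the two embodiments above can be sketched as follows. The orderings of the two cache lists and the mapping of directions onto candidate videos are illustrative assumptions, as are all names; the embodiments only require that the slow path read from the first cache list and the fast path fall back to the second cache list:

```python
def determine_seventh_video(t_second, t_first, interval_threshold,
                            first_cache_list, second_cache_list, direction):
    """Choose the seventh video based on how quickly the second
    interaction operation follows the first.

    first_cache_list is assumed ordered [fifth, second, first];
    second_cache_list is assumed ordered [eighth, first, ninth].
    """
    interval = t_second - t_first
    if interval >= interval_threshold:
        # Slow path: the first cache list has already been updated.
        fifth, _, first = first_cache_list
        return first if direction == "forward" else fifth
    # Fast path: read from the second cache list instead.
    eighth, _, ninth = second_cache_list
    return ninth if direction == "forward" else eighth
```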
In some embodiments, the second playing module 4554 is further configured to determine a reference volume value of an environment in which a terminal is located, where the terminal is a terminal running the web application, determine a playing volume value based on the reference volume value, and play the second video at the playing volume value using the web application.
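One possible way to derive a playing volume value from the reference volume value of the terminal's environment is a clamped linear mapping, sketched below. The 30-90 dB range, the output range, and the linearity are all illustrative choices; the embodiment only requires that the playing volume value be determined based on the reference volume value:

```python
def playing_volume(reference_volume_db, min_volume=0.2, max_volume=1.0):
    """Map an ambient reference volume (in dB) to a playing volume.

    Quieter environments map toward min_volume and louder environments
    toward max_volume, so playback remains audible over background noise.
    """
    quiet_db, loud_db = 30.0, 90.0  # illustrative calibration points
    ratio = (reference_volume_db - quiet_db) / (loud_db - quiet_db)
    ratio = min(max(ratio, 0.0), 1.0)  # clamp to the calibrated range
    return min_volume + ratio * (max_volume - min_volume)
```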
In some embodiments, the software modules stored in the video processing device 455 of the memory 450 further include a fifth display module configured to display a second control in a playing interface of a preset player, where the second control is a control for displaying information to be recommended, a seventh response module configured to obtain the information to be recommended in response to a triggering operation for the second control, and a sixth display module configured to pause playing the first video and display the information to be recommended in the display interface.
In some embodiments, the software modules stored in the video processing device 455 of the memory 450 further include a sixth obtaining module configured to obtain an index value of the first video and a first cache list containing the first video, a seventh obtaining module configured to obtain the first video from the first cache list based on the index value when a display duration of the information to be recommended reaches a duration threshold or a close instruction for the information to be recommended is received, and a sixth playing module configured to continue playing the first video in the playing interface.
In some embodiments, the sixth playing module is further configured to obtain a playing time of the first video when the information to be recommended starts to be displayed, obtain a second video frame corresponding to the playing time from the first video, and play the first video in the playing interface with the second video frame as a start frame.
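The resume behaviour of the sixth and seventh obtaining modules can be sketched as follows: the first video is retrieved from the first cache list by its index value, and the second video frame is computed from the playing time recorded when the information to be recommended started to be displayed. The function name and the fixed frame rate are hypothetical:

```python
def resume_first_video(first_cache_list, index_value, paused_at_seconds, fps=30):
    """Retrieve the first video from the first cache list by its index
    value and compute the frame at which playback should resume.

    Returns the cached video together with the second video frame, i.e.
    the frame corresponding to the recorded playing time.
    """
    video = first_cache_list[index_value]
    start_frame = int(paused_at_seconds * fps)  # frame at the paused playing time
    return video, start_frame
```

Playback then continues in the playing interface with the returned frame as the start frame, so the user resumes exactly where the recommended information interrupted the first video.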
In some embodiments, the software modules stored in the video processing device 455 of the memory 450 further include an eighth obtaining module configured to obtain a reference display direction corresponding to the first video and a first display direction in which a playing interface of a preset player is located, a first generating module configured to generate a rotation prompt message when the reference display direction is different from the first display direction, where the rotation prompt message is used to prompt the user to rotate the display direction, and a seventh display module configured to display the rotation prompt message in the playing interface.
In some embodiments, the seventh display module is further configured to display a prompt interface in the playing interface, where a first area of the prompt interface is smaller than a second area of the playing interface, and to display the rotation prompt message in the prompt interface.
In some embodiments, the seventh display module is further configured to display the prompt interface in the playing interface based on preset location information, or to determine a background image of each video frame in the first video and display the prompt interface in a background area of the playing interface, where the background area is an area in which the background image is displayed.
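The rotation-prompt embodiments above reduce to a comparison of the two display directions. A minimal sketch, with hypothetical names and with `"portrait"` / `"landscape"` as illustrative direction values:

```python
def rotation_prompt(reference_direction, first_display_direction):
    """Generate a rotation prompt message when the video's reference
    display direction differs from the playing interface's direction.

    Returns None when the directions already match and no prompt is needed.
    """
    if reference_direction == first_display_direction:
        return None
    return ("Rotate the screen to %s for the best viewing experience."
            % reference_direction)
```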
Embodiments of the present application provide a computer program product or computer program comprising computer executable instructions stored in a computer readable storage medium. The processor of the computer device reads the computer executable instructions from the computer readable storage medium, and the processor executes the computer executable instructions, so that the computer device executes the video playing method according to the embodiment of the application.
Embodiments of the present application provide a computer-readable storage medium storing computer-executable instructions that, when executed by a processor, cause the processor to perform the video playing method provided by the embodiments of the present application, for example, the video playing method shown in FIG. 3 and FIG. 10.
In some embodiments, the computer-readable storage medium may be a memory such as an FRAM, a ROM, a PROM, an EPROM, an EEPROM, a flash memory, a magnetic surface memory, an optical disk, or a CD-ROM, or may be any device including one of the above memories or any combination thereof.
In some embodiments, computer-executable instructions may be written in any form of programming language, including compiled or interpreted languages, or declarative or procedural languages, in the form of programs, software modules, scripts, or code, and they may be deployed in any form, including as stand-alone programs or as modules, components, subroutines, or other units suitable for use in a computing environment.
As an example, computer-executable instructions may, but need not, correspond to files in a file system, and may be stored in a portion of a file that holds other programs or data, for example, in one or more scripts in a Hypertext Markup Language (HTML) document, in a single file dedicated to the program in question, or in multiple coordinated files (e.g., files that store one or more modules, sub-programs, or portions of code).
As an example, computer-executable instructions may be deployed to be executed on one computing device or on multiple computing devices located at one site or distributed across multiple sites and interconnected by a communication network.
The foregoing describes merely exemplary embodiments of the present application and is not intended to limit the scope of the present application. Any modification, equivalent replacement, improvement, or the like made within the spirit and scope of the present application shall be included in the protection scope of the present application.