CN112702626A - Video file playing switching method, server, client, equipment and medium - Google Patents
Info
- Publication number
- CN112702626A (application number CN202011387176.9A)
- Authority
- CN
- China
- Prior art keywords
- file
- video
- video segment
- resolution
- playing
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/47—End-user applications
- H04N21/472—End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
- H04N21/47202—End-user interface for requesting content on demand, e.g. video on demand
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/23—Processing of content or additional data; Elementary server operations; Server middleware
- H04N21/234—Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
- H04N21/2343—Processing of video elementary streams involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
- H04N21/234336—Processing of video elementary streams involving reformatting operations by media transcoding, e.g. video is transformed into a slideshow of still pictures or audio is converted into text
- H04N21/238—Interfacing the downstream path of the transmission network, e.g. adapting the transmission rate of a video stream to network bandwidth; Processing of multiplex streams
- H04N21/2387—Stream processing in response to a playback request from an end-user, e.g. for trick-play
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations; Client middleware
- H04N21/44—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream
- H04N21/4402—Processing of video elementary streams involving reformatting operations of video signals for household redistribution, storage or real-time display
- H04N21/440236—Processing of video elementary streams involving reformatting operations by media transcoding, e.g. video is transformed into a slideshow of still pictures, audio is converted into text
- H04N21/80—Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
- H04N21/83—Generation or processing of protective or descriptive data associated with content; Content structuring
- H04N21/845—Structuring of content, e.g. decomposing content into time segments
- H04N21/8456—Structuring of content by decomposing the content in the time domain, e.g. in time segments
Abstract
The embodiments of this application relate to the field of information processing, and in particular to a video file playing switching method, a server, a client, a device, and a medium. The method includes: receiving an enlarged playing instruction, sent by a client, for a first video file having a first resolution; and determining a file set of video segment files, according to the first resolution and the playing-interface position information corresponding to each video segment file obtained in advance by spatial-domain slicing of a second video file having a second resolution greater than the first resolution, such that the total number of pixels contained in each image frame obtained by spatially combining the image frames of the video segment files in the file set is not less than the total number of pixels contained in a single image frame of the first video file. With the embodiments of this application, the video picture can be enlarged without loss of clarity, providing a high-quality video viewing experience.
Description
Technical Field
The embodiments of this application relate to the field of information processing, and in particular to a video file playing switching method, a server, a client, a device, and a medium.
Background
With the development of the internet and the popularization of smart devices, more and more people watch videos on smart devices. To meet users' increasingly rich viewing requirements, video playing platforms usually provide functions such as rewind or fast-forward playing, variable-speed playing, and zooming the video picture in or out. However, the inventors found the following problem in the related art: when a user chooses to play an enlarged video, the frame images of the original video are usually stretched directly, and directly stretched images are blurred and of poor quality, so the user's demand for high-quality viewing cannot be met.
Disclosure of Invention
An object of the embodiments of the present application is to provide a video file playing switching method, a server, a client, a device, and a medium, which can enlarge the video picture without loss of clarity, providing a high-quality video viewing experience.
In order to solve the above technical problem, an embodiment of the present application provides a video file playing switching method, including: receiving an enlarged playing instruction, sent by a client, for a first video file, the first video file being a target video file with a first resolution; determining a file set of video segment files from the video segment files according to the first resolution and the playing-interface position information corresponding to each video segment file obtained in advance by spatial-domain slicing of a second video file, the second video file being a target video file with a second resolution, the second resolution being greater than the first resolution, where the file set satisfies: the total number of pixels contained in each image frame obtained by spatially combining the image frames of the video segment files in the file set is not less than the total number of pixels contained in a single image frame of the first video file; and, based on the file set, triggering the client to switch from playing the first video file to playing the video file obtained by spatially combining the image frames of the video segment files in the file set.
An embodiment of the present application further provides a video file playing switching method, including: after an enlarged playing operation for a first video file is detected, sending an enlarged playing instruction for the first video file to a server, the first video file being a target video file with a first resolution; receiving a file set of video segment files sent by the server, where the file set satisfies: each video segment file in the file set is determined from the video segment files obtained by spatial-domain slicing of a second video file, and the total number of pixels contained in each image frame obtained by spatially combining the image frames of the video segment files in the file set is not less than the total number of pixels contained in a single image frame of the first video file, the second video file being a target video file with a second resolution, the second resolution being greater than the first resolution; and, based on the file set sent by the server, switching from playing the first video file to playing the video file obtained by spatially combining the image frames of the video segment files in the file set.
An embodiment of the present application further provides a server, including: an instruction receiving module, an enlargement identification module, and an instruction sending module. The instruction receiving module is configured to receive an enlarged playing instruction, sent by a client, for a first video file, the first video file being a target video file with a first resolution. The enlargement identification module is configured to determine a file set of video segment files from the video segment files according to the first resolution and the playing-interface position information corresponding to each video segment file obtained in advance by spatial-domain slicing of a second video file, the second video file being a target video file with a second resolution, the second resolution being greater than the first resolution, where the file set satisfies: the size of the spatial combination of the image frames of the video segment files in the file set matches the first resolution. The instruction sending module is configured to trigger the client, based on the file set, to switch from playing the first video file to playing the video file obtained by spatially combining the image frames of the video segment files in the file set.
An embodiment of the present application further provides a client, including: an instruction sending module, an instruction receiving module, and a playing module. The instruction sending module is configured to send an enlarged playing instruction for a first video file to a server after an enlarged playing operation for the first video file is detected, the first video file being a target video file with a first resolution. The instruction receiving module is configured to receive a file set of video segment files sent by the server, where the file set satisfies: each video segment file in the file set is determined from the video segment files obtained by spatial-domain slicing of a second video file, and the size of the spatial combination of the image frames of the video segment files matches the first resolution, the second video file being a target video file with a second resolution, the second resolution being greater than the first resolution. The playing module is configured to switch, based on the file set sent by the server, from playing the first video file to playing the video file obtained by spatially combining the image frames of the video segment files in the file set.
An embodiment of the present application further provides an electronic device, including: at least one processor; and a memory communicatively coupled to the at least one processor; the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor to enable the at least one processor to execute the video file playing switching method.
An embodiment of the present application further provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the video file playing switching method described above.
Compared with the prior art, the embodiments of the present application receive an enlarged playing instruction, sent by a client, for a first video file, the first video file being a target video file with a first resolution; determine a file set of video segment files from the video segment files according to the first resolution and the playing-interface position information corresponding to each video segment file obtained in advance by spatial-domain slicing of a second video file, the second video file being a target video file with a second resolution greater than the first resolution, where the file set satisfies the condition that the total number of pixels contained in each image frame obtained by spatially combining the image frames of the video segment files in the file set is not less than the total number of pixels contained in a single image frame of the first video file; and, based on the file set, trigger the client to switch from playing the first video file to playing the video file obtained by spatially combining the image frames of the video segment files in the file set. That is, the target video file is processed in advance to obtain a second video file with a resolution greater than that of the first video file; the second video file is spatially sliced into video segment files, and a file set is determined from those video segment files for playing. Because the total number of pixels in each spatially combined image frame is not less than that in a single image frame of the first video file, video segments with higher image definition are assembled into the file set for playing. This avoids the image-quality degradation caused by directly stretching each image frame of the first video file, enlarges the video picture without loss of clarity, and provides a high-quality video viewing experience.
In addition, the client sends the enlarged playing instruction after detecting an enlarged playing operation for the first video file on the playing interface.
In addition, determining a file set of video segment files from the video segment files according to the first resolution and the playing-interface position information corresponding to each video segment file obtained in advance by spatial-domain slicing of a second video file includes: determining the file set from the video segment files according to the first resolution, the target position information of the playing interface corresponding to the enlarged playing instruction, and the playing-interface position information corresponding to each video segment file obtained in advance by spatial-domain slicing of the second video file. That is to say, when the file set is determined, the playing interface corresponding to the enlarged playing instruction is identified specifically, so that this playing interface can be enlarged in a targeted manner.
In addition, the file set includes: the video segment file corresponding to the target position information, and a plurality of video segment files spatially adjacent to it. Determining, from the video segment files, the file set matched with the target position information according to the first resolution, the target position information of the playing interface corresponding to the enlarged playing instruction, and the playing-interface position information corresponding to each video segment file obtained by spatial-domain slicing of the second video file includes: determining the video segment file corresponding to the target position information according to the target position information and the playing-interface position information corresponding to each video segment file obtained in advance by spatial-domain slicing of the second video file; and determining the plurality of spatially adjacent video segment files according to the size of the video segment file corresponding to the target position information and the first resolution. It can be understood that the video segment file corresponding to the target position information, together with the spatially adjacent video segment files, can present a relatively complete enlarged video picture.
In addition, the first resolution includes a first vertical pixel count and a first horizontal pixel count. In the vertical direction, the number H of video segment files spatially adjacent to the video segment file corresponding to the target position information is calculated by the following formula:

H = ⌈h(Scale) / h(TargetTile)⌉

where h(Scale) represents the first vertical pixel count, and h(TargetTile) represents the target height of the video segment file corresponding to the target position information. In the horizontal direction, the number W of video segment files spatially adjacent to the video segment file corresponding to the target position information is calculated by the following formula:

W = ⌈w(Scale) / w(TargetTile)⌉

where w(Scale) represents the first horizontal pixel count, and w(TargetTile) represents the target width of the video segment file corresponding to the target position information.
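As a rough sketch of this tile-count calculation, the per-axis count can be computed with a ceiling division; the helper below is illustrative only (the ceiling form is an assumption, since the patent's formula images are not reproduced in this text):

```python
import math


def tiles_needed(first_scale_px, target_tile_px):
    """Number of tiles along one axis so that the combined tiles
    span at least the first resolution along that axis.
    (Assumed ceiling form; not a verbatim formula from the patent.)"""
    return math.ceil(first_scale_px / target_tile_px)


# First resolution 1920x1080, tiles of 960x540:
W = tiles_needed(1920, 960)  # horizontal direction -> 2
H = tiles_needed(1080, 540)  # vertical direction   -> 2
print(W, H)
```

With 960 × 540 tiles and a 1920 × 1080 first resolution, a 2 × 2 block of tiles carries exactly as many pixels as one first-resolution frame, satisfying the "not less than" condition above.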
In addition, after the client is triggered to switch from playing the first video file to playing the video file obtained by spatially combining the image frames of the video segment files in the file set, the method further includes: if a zoom-out playing instruction sent by the client is received, triggering the client to switch from playing the spatially combined video file back to playing the first video file. In other words, a video played in the enlarged state is restored to its initial state, flexibly meeting users' real-time viewing needs.
Drawings
One or more embodiments are illustrated by the corresponding figures in the drawings, which are not meant to be limiting.
Fig. 1 is a schematic flowchart of a video file playing switching method according to a first embodiment of the present application;
fig. 2 is a first playback effect diagram in the first embodiment of the present application;
FIG. 3 is a view showing the slicing effect in the first embodiment of the present application;
FIG. 4 is a schematic illustration of spatial slicing a second video file in a first embodiment of the present application;
FIG. 5 is a schematic diagram of a document collection in a first embodiment of the present application;
fig. 6 is a second playback effect diagram in the first embodiment of the present application;
FIG. 7 is a functional schematic of a first embodiment of the present application;
fig. 8 is a schematic flowchart of a video file playing switching method according to a second embodiment of the present application;
fig. 9 is a schematic structural diagram of a server in the third embodiment of the present application;
fig. 10 is a schematic structural diagram of a client according to a fourth embodiment of the present application;
fig. 11 is a schematic structural diagram of an electronic device in a fifth embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions, and advantages of the embodiments of the present application clearer, the embodiments are described in detail below with reference to the accompanying drawings. It will be appreciated by those of ordinary skill in the art that numerous technical details are set forth in the embodiments to provide a better understanding of the present application; the technical solutions claimed herein can nevertheless be implemented without these technical details, and with various changes and modifications based on the following embodiments.
A first embodiment of the present application relates to a video file playing switching method; the specific flow is shown in fig. 1, and the method includes:
101, receiving an enlarged playing instruction, sent by a client, for a first video file;
102, determining a file set of video segment files from the video segment files according to the first resolution and the playing-interface position information corresponding to each video segment file obtained in advance by spatial-domain slicing of a second video file;
and 103, based on the file set, triggering the client to switch from playing the first video file to playing the video file obtained by spatially combining the image frames of the video segment files in the file set.
The video file playing switching method of this embodiment is described in detail below; the details are provided for ease of understanding and are not necessary for implementing the present solution.
The video file playing switching method in this embodiment can be applied to scenarios in which a user watches videos on a video website or video app through a smart device; the user enlarges the video by operating the smart device, and the server executes the video file playing switching method of this embodiment.
In step 101, the server receives an enlarged playing instruction for a first video file sent by a client. In one embodiment, the enlarged playing instruction is sent by the client after an enlarged playing operation for the first video file is detected, through the screen of the smart device, while the client is playing the first video file. For example, while a video is playing, the user taps the screen several times in quick succession (e.g., twice) as the enlarged playing operation, or the user spreads two fingers apart on the screen simultaneously as the enlarged playing operation. It is to be understood that the above description is intended to be illustrative, and not restrictive.
The first video file is a target video file with a first resolution.
It should be noted that, when the server receives the enlarged playing instruction, it may further record a timestamp included in the instruction, so as to determine the start time of the video segment files to be acquired. The timestamp in the enlarged playing instruction may be the time at which the client detected the user's enlarged playing operation.
In step 102, the server determines a file set of video segment files from the video segment files according to the first resolution and the playing-interface position information corresponding to each video segment file obtained in advance by spatial-domain slicing of the second video file. The second video file is a target video file with a second resolution, the second resolution being greater than the first resolution. The file set satisfies: the total number of pixels contained in each image frame obtained by spatially combining the image frames of the video segment files in the file set is not less than the total number of pixels contained in a single image frame of the first video file.
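The selection condition on the file set can be sketched as a simple pixel-count check. The helper below is illustrative only (tiles are given as hypothetical (width, height) pairs), not part of the described server:

```python
def covers_first_resolution(tiles, first_w, first_h):
    """Check that the tiles in a candidate file set together carry at
    least as many pixels as one frame of the first (lower-resolution)
    video file, per the condition described in step 102."""
    total_tile_pixels = sum(w * h for (w, h) in tiles)
    return total_tile_pixels >= first_w * first_h


# Four 960x540 tiles (from a 3840x2160 second file split 4x4 per axis
# region) exactly cover a 1920x1080 first-resolution frame:
print(covers_first_resolution([(960, 540)] * 4, 1920, 1080))  # True
```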
Specifically, the transcoder and the packager in the server may perform a series of processes on the target video file in advance, as described below.
First, according to actual playing requirements, the transcoder transcodes the target video file at a specified resolution (the second resolution), bit rate, and so on, to obtain the target video file with the second resolution. For example, assuming the second resolution is 3840 × 2160 and the first video file is the target video file in 1080P format (i.e., the first resolution is 1920 × 1080), the second video file with 3840 × 2160 resolution is obtained as the target video file by upsampling the first video file.
Then, the packager may spatially slice the second video file according to the HEVC tile coding scheme: in the spatial domain, each image is divided along the horizontal and vertical directions into a number of rectangular regions, each called a tile region. In this embodiment, each frame image may be divided into m × n tile regions of equal size, where m and n are even numbers not less than 4. After spatial slicing, the divided video can be packaged according to the DASH standard, generating a video segment file (e.g., an m4s file) for each tile region together with a corresponding configuration file (e.g., an mpd file). The configuration file stores the position information of the playing interface (i.e., the tile region) corresponding to its video segment file; when the client needs to play the video, it reads the position information stored in the configuration file and then obtains the video segment file according to that position information. In one embodiment, the position information stored in the configuration file may be as follows:
<SupplementalProperty schemeIdUri="urn:mpeg:dash:srd:2014" value="1,0,0,960,540"/>
Here, in the value "1,0,0,960,540", "1" represents the ID of the video segment file, "0,0" means the position coordinate of the playing interface corresponding to the video segment file within the whole video image is (0,0), "960" represents the width of that playing interface, and "540" represents its height.
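A client reading this configuration could split the SRD value string into those fields. The helper below is a minimal sketch of that parsing; the field names follow the reading given above and are not part of any real DASH library:

```python
def parse_srd_value(value):
    """Split a DASH SRD 'value' string ("id,x,y,width,height" per the
    example above) into its fields. Illustrative helper only."""
    segment_id, x, y, w, h = (int(v) for v in value.split(","))
    return {"id": segment_id, "x": x, "y": y, "width": w, "height": h}


info = parse_srd_value("1,0,0,960,540")
print(info["id"], info["x"], info["y"], info["width"], info["height"])
```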
In one embodiment, a second video file 30 s long is spatially sliced: each frame image is divided into 4 × 4 tile regions in the spatial domain, and the video is split every 5 s in the time domain. Packaging the divided second video file then yields 4 × 4 × (30/5) = 96 video segment files together with their corresponding configuration files.
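The segment count in this example follows directly from the slicing parameters, as the short sketch below shows:

```python
def segment_file_count(duration_s, segment_len_s, tiles_per_row, tiles_per_col):
    """Total video segment files produced by m x n spatial tiling
    combined with fixed-length temporal splitting."""
    return tiles_per_row * tiles_per_col * (duration_s // segment_len_s)


# 30 s video, 5 s temporal segments, 4x4 spatial tiles:
print(segment_file_count(30, 5, 4, 4))  # 96
```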
Specifically, the server may determine the file set from the video segment files according to the first resolution and the playing-interface position information corresponding to each video segment file; it may also determine the file set according to the first resolution, the target position information of the playing interface corresponding to the enlarged playing instruction, and the playing-interface position information corresponding to each video segment file, i.e., the following sub-steps 1021 and 1022.
Sub-step 1021: determining the video segment file corresponding to the target position information according to the target position information of the playing interface corresponding to the enlarged playing instruction and the playing-interface position information corresponding to each video segment file obtained in advance by spatial-domain slicing of the second video file.
Specifically, when the user makes the client send an enlarged playing instruction by, for example, tapping the screen several times, the user usually taps the place that he or she wants to enlarge. The server can therefore analyze the enlarged playing instruction to obtain the target position information (i.e., the position tapped by the user) of the playing interface corresponding to the instruction, and this target position is regarded as the central position that the user wants to enlarge. The video segment file corresponding to the target position information is then determined according to that target position information, and is regarded as the video segment file corresponding to the central position that the user wants to enlarge.
In one embodiment, a first playing effect diagram of the server when playing the first video file is shown in fig. 2. Since the first video file and the second video file are video files with different resolutions but the same video picture, the second video file is spatially sliced by a 4 × 10 division with reference to the slice effect diagram shown in fig. 3, obtaining 4 × 10 video segment files, shown as the blocks numbered 1 to 40 in fig. 4. The playing interfaces corresponding to these video segment files can likewise be understood as the blocks numbered 1 to 40 in fig. 4, each block corresponding to one piece of position information: for example, the position information of the playing interface shown in the block numbered 1 is (1, 1), that of the block numbered 2 is (1, 2), ..., that of the block numbered 11 is (2, 1), ..., and that of the block numbered 40 is (4, 10). In this embodiment, when the user taps the screen several times within the block whose position information is (2, 2) to make the client send the enlarged playing instruction, the target position information of the playing interface corresponding to the enlarged playing instruction is parsed as (2, 2); that is, the playing interface numbered 12 is regarded as the central position that the user wants to enlarge, and the video segment file corresponding to this central position is the one shown in the block numbered 12.
Of course, in the embodiment of the present invention, one piece of position information may be stored in the server in advance as default target position information (e.g., the coordinates of the center of the interface), so that when the enlarged playing instruction is received, the corresponding video segment file is determined according to the default target position information and used as the video segment file corresponding to the central position that the user wants to enlarge.
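As an illustrative sketch of how a tap position might be mapped to the tile grid (the helper name and the uniform-grid assumption are mine, not the specification's):

```python
def tap_to_tile(tap_x: float, tap_y: float,
                frame_w: int, frame_h: int,
                rows: int, cols: int) -> tuple:
    """Map a tap on the playing interface to a 1-based (row, col) tile index,
    assuming the frame is divided into a uniform rows x cols grid."""
    col = min(int(tap_x / frame_w * cols) + 1, cols)
    row = min(int(tap_y / frame_h * rows) + 1, rows)
    return row, col

# A tap at (250, 300) on a 1920x1080 interface with a 4 x 10 grid
# falls in the tile with position information (2, 2).
print(tap_to_tile(250, 300, 1920, 1080, 4, 10))  # (2, 2)
```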
Step 1022: determine, according to the size of the video segment file corresponding to the target position information and the first resolution, a plurality of video segment files adjacent to that video segment file in the spatial domain. The size of the video segment file corresponding to the target position information comprises a target height and a target width, and the first resolution comprises a first vertical pixel sum and a first horizontal pixel sum.
Specifically, the number of video segment files adjacent in the spatial domain to the video segment file corresponding to the target position information may be determined in the following manner:
(1) In the vertical direction, according to the target height H(targetTile) of the video segment file corresponding to the target position information and the first vertical pixel sum H(scale), calculate the vertical proportion tile_row_num = H(scale) / H(targetTile);
in the horizontal direction, according to the target width W(targetTile) of the video segment file corresponding to the target position information and the first horizontal pixel sum W(scale), calculate the horizontal proportion tile_column_num = W(scale) / W(targetTile).
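Sub-step (1) amounts to two divisions; a minimal sketch using the example values given later (a 1920 × 1080 first resolution and 384 × 540 tiles):

```python
def tile_ratios(first_w: int, first_h: int,
                tile_w: int, tile_h: int) -> tuple:
    """Vertical and horizontal proportions from sub-step (1)."""
    tile_row_num = first_h / tile_h      # H(scale) / H(targetTile)
    tile_column_num = first_w / tile_w   # W(scale) / W(targetTile)
    return tile_row_num, tile_column_num

print(tile_ratios(1920, 1080, 384, 540))  # (2.0, 5.0)
```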
(2) Taking the playing interface corresponding to the video segment file corresponding to the target position information as the center, acquire, in the vertical direction, H video segment files adjacent to that playing interface in the spatial domain, where H is calculated by the following formula: H = tile_row_num − 1;
and acquire, in the horizontal direction, W video segment files adjacent to that playing interface in the spatial domain, where W is calculated by the following formula: W = tile_column_num − 1.
More specifically, in connection with the embodiment in step 1021, when the target position information is (A, B), the H video segment files can be acquired in the vertical direction as follows:
calculate the value of (tile_row_num / 2) rounded to an integer as A1, and acquire the video segment files with position information (A−A1, B) ... (A−1, B), i.e., acquire video segment files upward; calculate the value of ((tile_row_num − 1) − (tile_row_num / 2)) rounded to an integer as A2, and acquire the video segment files with position information (A+1, B) ... (A+A2, B), i.e., acquire video segment files downward. The sum of the number of video segment files with position information (A−A1, B) ... (A−1, B) and the number with position information (A+1, B) ... (A+A2, B) is H.
And the W video segment files can be acquired in the horizontal direction as follows:
calculate the value of (tile_column_num / 2) rounded to an integer as B1, and acquire the video segment files with position information (A, B−B1) ... (A, B−1), i.e., acquire video segment files to the left; calculate the value of ((tile_column_num − 1) − (tile_column_num / 2)) rounded to an integer as B2, and acquire the video segment files with position information (A, B+1) ... (A, B+B2), i.e., acquire video segment files to the right. The sum of the number of video segment files with position information (A, B−B1) ... (A, B−1) and the number with position information (A, B+1) ... (A, B+B2) is W.
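The A1/A2 and B1/B2 acquisition rule above can be sketched as follows. One consistent reading of the rounding, matched to the later example (2.5 rounds to 3, and the remainder side takes what is left of tile_column_num − 1), is rounding up; the function name is illustrative:

```python
import math

def neighbor_coords(a: int, b: int,
                    tile_row_num: float, tile_column_num: float) -> tuple:
    """Position information of the tiles adjacent to the target tile (A, B).
    A1/B1 are rounded up; A2/B2 take the remainder, so that
    H = A1 + A2 = tile_row_num - 1 and W = B1 + B2 = tile_column_num - 1."""
    a1 = math.ceil(tile_row_num / 2)
    a2 = math.ceil(tile_row_num) - 1 - a1
    b1 = math.ceil(tile_column_num / 2)
    b2 = math.ceil(tile_column_num) - 1 - b1
    vertical = ([(a - i, b) for i in range(a1, 0, -1)]        # upward
                + [(a + i, b) for i in range(1, a2 + 1)])      # downward
    horizontal = ([(a, b - j) for j in range(b1, 0, -1)]      # leftward
                  + [(a, b + j) for j in range(1, b2 + 1)])    # rightward
    return vertical, horizontal

# Target (3, 4) with tile_row_num = 2, tile_column_num = 5, as in the example:
v, h = neighbor_coords(3, 4, 2, 5)
print(v)  # [(2, 4)]
print(h)  # [(3, 1), (3, 2), (3, 3), (3, 5)]
```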
If the playing interface corresponding to the video segment file with target position information (A, B) is located in the first row of the whole video image, then instead of acquiring the video segment files with position information (A−A1, B) ... (A−1, B), the video segment files with position information (A+1, B) ... (A+A1, B) may be acquired. Similarly, whenever the playing interface corresponding to the video segment file of the target position information is located at the outer edge of the whole video image and adjacent video segment files cannot be acquired in some direction, adjacent video segment files can be acquired in the direction opposite to the current acquisition direction; if the calculated number of video segment files cannot all be acquired in a certain direction, video segment files are acquired in the opposite direction to make up the number.
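One way to realize the edge rule above is to clamp the acquisition window so that indices falling outside the image are taken from the opposite side instead. This sliding-window formulation is my reading of the rule, not the specification's exact algorithm:

```python
def clamp_window(center: int, before: int, after: int, lo: int, hi: int) -> list:
    """Pick `before` indices below `center` and `after` above it, all within
    [lo, hi]; indices that would fall outside the image are re-acquired from
    the opposite side, as described for tiles on the first row / outer edge.
    Returns the neighbor indices (the center itself is excluded)."""
    total = before + after
    start = max(lo, min(center - before, hi - total))
    return [i for i in range(start, start + total + 1) if i != center]

# Center tile in the first row (A = 1): the one "upward" neighbor is
# acquired downward instead, as (A + 1).
print(clamp_window(1, 1, 0, 1, 4))  # [2]
```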
In one example, the second video file has a second resolution of 4K, e.g., 3840 (second horizontal pixel sum) × 2160 (second vertical pixel sum), and is spatially divided into 4 × 10 tile regions, where each tile region has a height H(targetTile) = 540 and a width W(targetTile) = 384;
the first video file has a first resolution of 1920 × 1080, i.e., H(scale) = 1080 and W(scale) = 1920;
by calculation, tile_row_num = 2 and tile_column_num = 5;
(tile_row_num / 2) is calculated to be 1, which rounds to 1, so 1 adjacent video segment file is acquired in the spatial domain with the video segment file corresponding to the target position information (A, B) as the center, with position information (A−1, B); ((tile_row_num − 1) − (tile_row_num / 2)) = 0, so no adjacent video segment file is acquired downward in the spatial domain;
(tile_column_num / 2) is calculated to be 2.5, which rounds to 3, so 3 adjacent video segment files are acquired in the spatial domain with the video segment file corresponding to the target position information (A, B) as the center, with position information (A, B−3), (A, B−2), (A, B−1); ((tile_column_num − 1) − (tile_column_num / 2)) = 1, so 1 adjacent video segment file is acquired with the video segment file corresponding to the target position information (A, B) as the center, with position information (A, B+1);
to sum up, there are 2 video segment files in the vertical direction and 5 in the horizontal direction, and the finally obtained file set contains these video segment files; that is, the file set is composed of 2 rows and 5 columns of video segment files, and the picture obtained by spatial domain combination of the image frames of the video segment files in the file set has height H(targetTile) × 2 = 1080 and width W(targetTile) × 5 = 1920, i.e., a vertical pixel sum of 1080 and a horizontal pixel sum of 1920. This matches the first resolution (1920 × 1080); that is, the total number of pixels is not less than the total number of pixels contained in a single image frame of the first video file with a resolution of 1080P.
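The pixel-count condition on the file set can be verified directly (the function name is illustrative):

```python
def file_set_is_valid(rows: int, cols: int, tile_w: int, tile_h: int,
                      first_w: int, first_h: int) -> bool:
    """The combined picture's pixel total must be no less than the
    first video file's single-frame pixel total."""
    combined = (rows * tile_h) * (cols * tile_w)
    return combined >= first_w * first_h

# 2 rows x 5 columns of 384x540 tiles vs. a 1920x1080 first video file:
print(file_set_is_valid(2, 5, 384, 540, 1920, 1080))  # True
```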
In one example, the second video file is spatially sliced to obtain the 4 × 10 video segment files shown in fig. 4 as the blocks numbered 1 to 40. The target position information of the playing interface corresponding to the enlarged playing instruction is parsed as (3, 4); that is, the video segment file shown in the block numbered 24 is taken as the center. According to the above calculation result, with the file numbered 24 as the center, 1 adjacent video segment file with position information (2, 4) (i.e., the file numbered 14) is acquired in the spatial domain; 3 adjacent video segment files with position information (3, 1), (3, 2), (3, 3) (i.e., the files numbered 21, 22, 23) are acquired in the spatial domain; and 1 adjacent video segment file with position information (3, 5) (i.e., the file numbered 25) is acquired in the spatial domain. In summary, the selected files span rows 2 to 3 and columns 1 to 5, so the resulting file set is composed of the video segment files numbered 11 to 15 and 21 to 25, as shown in the file set diagram of fig. 5.
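The block numbers of the rectangular file set can be enumerated from the selected rows and columns; the label formula assumes the raster numbering of fig. 4 (label = (row − 1) × 10 + col):

```python
def file_set_labels(rows: set, cols: set, grid_cols: int) -> list:
    """Block labels of the file-set rectangle spanned by the selected
    rows and columns, for a grid numbered 1..rows*cols in raster order."""
    return sorted((r - 1) * grid_cols + c for r in rows for c in cols)

# Rows 2-3 and columns 1-5 of the 4 x 10 grid of fig. 4:
print(file_set_labels({2, 3}, {1, 2, 3, 4, 5}, 10))
# [11, 12, 13, 14, 15, 21, 22, 23, 24, 25]
```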
In step 103, based on the file set, the server triggers the client to switch from playing the first video file to playing the video file obtained by performing spatial domain combination on the image frames of the video segment files in the file set. As an example, the first playing effect before the switch may be as shown in fig. 2, and the second playing effect after the switch may be as shown in fig. 6.
Specifically, the server may number the playing interface corresponding to each video segment file in the file set in a raster-scan manner or the like, the numbering result being the position information of each playing interface, and then search for and acquire the video segment file corresponding to the position information of each playing interface according to the configuration file of each video segment file, as the video segment files in the file set. As can be seen from the foregoing description, the configuration file corresponding to a video segment file stores the position information of the playing interface corresponding to that video segment file; therefore, the obtained position information of each playing interface can be compared with the position information stored in each configuration file, and when it matches the position information stored in a certain configuration file, the video segment file corresponding to that configuration file is acquired as a video segment file in the file set.
After each video segment file in the file set is obtained, the server performs spatial domain combination on these video segment files, and the resulting video file is played as the enlarged video file. In this embodiment, the video segment files are obtained by dividing the second video file in an HEVC tile encoding manner, so an HEVC video file is obtained after the video segment files are assembled and spliced, and it can be returned to the client for playing in streaming media form. When the video segment files are assembled and spliced, a total sequence parameter set (SPS), a total picture parameter set (PPS), and the slice header information of each playing interface of the HEVC video file can be generated according to parameters such as the resolution of the video segment files, and the slice data of each tile region is spliced in sequence to obtain the final HEVC video code stream, specifically:
in the total picture parameter set (PPS), the total number of tile columns (num_tile_columns_minus1), the total number of tile rows (num_tile_rows_minus1), and the column width (column_width_minus1) and row height (row_height_minus1) of each video segment file in the file set after spatial domain combination may be generated from the horizontal and vertical proportions (i.e., tile_row_num and tile_column_num) calculated in step 1022; in the total sequence parameter set (SPS), the picture resolution parameters (pic_width_in_luma_samples and pic_height_in_luma_samples) may be generated according to the height and width of each video segment file in the file set after spatial domain combination; in the slice header information of each playing interface, the address (slice_segment_address) of each playing interface is calculated according to the resolution of each video segment file and the number of rows of the video file obtained by performing spatial domain combination on the video segment files. For example, the tile address of each video segment file in the first row of the combined video file is: the address of the previous video segment file + ((target width of the video segment file + 64 − 1) / 64) × ((target height of the video segment file + 64 − 1) / 64); and the address of each video segment file in the nth row is: the address of the last video segment file in the first row × (n − 1) + the address of the previous video segment file in that row + ((target width of the video segment file + 64 − 1) / 64) × ((target height of the video segment file + 64 − 1) / 64), where 64 is the minimum luma coding block size.
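The first-row address recurrence can be sketched numerically. This follows the description's increment formula literally (64 as the minimum luma coding block size); it is a sketch of the arithmetic, not a full HEVC bitstream writer:

```python
CTU = 64  # minimum luma coding block size assumed in the description

def address_step(tile_w: int, tile_h: int) -> int:
    """Per-tile increment ((w + 64 - 1) / 64) * ((h + 64 - 1) / 64),
    using integer ceiling division."""
    return ((tile_w + CTU - 1) // CTU) * ((tile_h + CTU - 1) // CTU)

def first_row_addresses(cols: int, tile_w: int, tile_h: int) -> list:
    """slice_segment_address of each tile in the first row: each address is
    the previous tile's address plus the per-tile step, starting from 0."""
    step = address_step(tile_w, tile_h)
    return [c * step for c in range(cols)]

# 384x540 tiles: step = ceil(384/64) * ceil(540/64) = 6 * 9 = 54
print(address_step(384, 540))            # 54
print(first_row_addresses(5, 384, 540))  # [0, 54, 108, 162, 216]
```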
In addition, in this embodiment, after the server triggers the client to play the video file obtained by performing spatial domain combination on the image frames of the video segment files in the file set, if the server receives a zoom-out playing instruction, it triggers the client to switch back to playing the first video file; that is, the video played in the enlarged state is restored to the lower-resolution first video file for playing. In an implementation manner, the zoom-out playing instruction is sent by the client after a zoom-out operation is detected on the screen of the smart device; for example, an operation of the user tapping the screen several times may be taken as the zoom-out operation, and an operation of the user sliding two fingers toward each other on the screen at the same time may also be taken as the zoom-out operation. It is to be understood that the above description is intended to be illustrative, and not restrictive.
The following describes the flow of the video file playing switching method in this embodiment with reference to the functional diagram shown in fig. 7. First, as shown in fig. 7 and with reference to the description in step 102, the server processes a target video file in advance through a transcoding device in the server according to the actual playing requirement, so as to obtain a target video file with the first resolution (e.g., a 1080P target video file, i.e., the first video file) and a target video file with the second resolution (e.g., a 4K target video file, i.e., the second video file). Then, the packaging device performs spatial domain slicing on the target video files of different resolutions according to the HEVC tile encoding mode, dividing each of them into a plurality of tile regions; each sliced target video file is then packaged according to the DASH standard to generate the video segment files of the regions of each target video file and the configuration files corresponding to the video segment files.
After the above processing is performed in advance, when the user operates the screen of the smart device and sends a playing instruction for the first video file through the client, the user instruction receiving device of the server receives the playing instruction and transmits it to the playing device of the server. The playing instruction for the first video file can be understood as an instruction requesting the video segment files of all regions of the first video file and their corresponding configuration files; the playing device of the server obtains these video segment files and configuration files according to the instruction, and, as described in step 103, the video splicing device of the server performs spatial domain combination on the video segment files of the regions of the first video file and returns the result to the playing device of the server, which returns the combined first video file to the player of the client in streaming media form for playing.
When the user operates the screen of the smart device and sends an enlarged playing instruction for the first video file through the client, the user instruction receiving device of the server receives the enlarged playing instruction and transmits it to the playing device of the server. The server determines, according to the first resolution, the target position information of the playing interface corresponding to the enlarged playing instruction, and the position information of the playing interface corresponding to each video segment file, a file set composed of video segment files from the video segment files of the second video file, where the file set satisfies: the total number of pixels contained in each image frame obtained by performing spatial domain combination on the image frames of the video segment files in the file set is not less than the total number of pixels contained in a single image frame of the first video file. As described in step 1021, the target position information of the playing interface corresponding to the enlarged playing instruction is obtained by the enlarged-area recognition device of the server analyzing the enlarged playing instruction.
After the file set is determined, the playing device of the server acquires the file set composed of the video segment files and the configuration files corresponding to them; as described in step 103, the video splicing device of the server performs spatial domain combination on the video segment files in the file set and returns the result to the playing device of the server, which returns the combined video file to the player of the client in streaming media form for playing.
It should be noted that, if the client itself has the function of storing the first video file and the second video file, the client may also serve as the execution subject of the video file playing switching method in this embodiment.
Compared with the prior art, in this embodiment an enlarged playing instruction for a first video file sent by a client is received, the first video file being a target video file with a first resolution; a file set composed of video segment files is determined from the video segment files according to the first resolution and the position information of the playing interface corresponding to each video segment file obtained by performing spatial domain slicing on a second video file in advance, the second video file being a target video file with a second resolution greater than the first resolution, and the file set satisfying that the total number of pixels contained in each image frame obtained by performing spatial domain combination on the image frames of the video segment files in the file set is not less than the total number of pixels contained in a single image frame of the first video file; and, based on the file set, the client is triggered to switch from playing the first video file to playing a video file obtained by performing spatial domain combination on the image frames of the video segment files in the file set. That is, the target video file is processed in advance to obtain a second video file with a resolution greater than that of the first video file; the second video file is spatially sliced into video segment files, and a file set whose combined image frames contain no fewer pixels than a single image frame of the first video file is determined from them for playing. In other words, video segments with higher image definition are assembled into the file set for playing, which avoids the image quality degradation caused by directly stretching each image frame of the first video file, enlarges the video picture clearly, and provides a high-quality video watching experience.
The second embodiment of the present application relates to a video file playing switching method, which can be applied to a scenario in which a user watches videos on a video website or in a video APP through a smart device. The user enlarges a first video file being played by the client by operating the smart device, and the client executes the video file playing switching method in this embodiment. The specific flow is shown in fig. 8 and includes:
Specifically, the enlarging operation may be understood as, for example, an operation in which the user taps the screen of the smart device several times in succession or slides two fingers apart on the screen at the same time. When the user operates the screen, the user usually operates at the place that he or she wants to enlarge, so the enlarged playing instruction generated according to the enlarging operation includes the target position information of the playing interface corresponding to the enlarged playing instruction.
Wherein the file set satisfies: each video segment file in the file set is determined from the video segment files obtained by performing spatial domain slicing on a second video file, and the total number of pixels contained in each image frame obtained by performing spatial domain combination on the image frames of the video segment files in the file set is not less than the total number of pixels contained in a single image frame of the first video file; the second video file is a target video file with a second resolution; the second resolution is greater than the first resolution.
Step 203: based on the file set composed of the video segment files sent by the server, switch from playing the first video file to playing a video file obtained by performing spatial domain combination on the image frames of the video segment files in the file set.
It should be understood that this embodiment is an embodiment corresponding to the client in the first embodiment, and the embodiment and the first embodiment can be implemented in cooperation. The details of the related art related to the client side mentioned in the first embodiment are still valid in this embodiment, and are not described herein again for the sake of reducing redundancy.
A third embodiment of the present application relates to a server, as shown in fig. 9, including: an instruction receiving module 301, an enlargement identification module 302, and an instruction sending module 303.
The instruction receiving module 301 is configured to receive an enlarged playing instruction for a first video file sent by a client; the first video file is a target video file with a first resolution;
the enlargement identification module 302 is configured to determine a file set composed of video segment files from the video segment files according to the first resolution and the position information of the playing interfaces corresponding to the video segment files obtained by performing spatial domain slicing on the second video file in advance; the second video file is a target video file with a second resolution; the second resolution is greater than the first resolution; the file set satisfies: the total number of pixels contained in each image frame obtained by performing spatial domain combination on the image frames of the video segment files in the file set is not less than the total number of pixels contained in a single image frame of the first video file;
the instruction sending module 303 is configured to trigger the client to switch from playing the first video file to playing a video file obtained by performing spatial domain combination on image frames of each video segment file in the file set based on the file set.
In one example, the enlarged playing instruction for the first video file sent by the client is sent after the client, while playing the first video file in the playing interface, detects an enlarging operation on the first video file.
In one example, that the enlargement identification module 302 determines, according to the first resolution and the position information of the playing interface corresponding to each video segment file obtained by performing spatial domain slicing on the second video file in advance, a file set composed of video segment files from the video segment files includes: determining the file set composed of the video segment files from the video segment files according to the first resolution, the target position information of the playing interface corresponding to the enlarged playing instruction, and the position information of the playing interface corresponding to each video segment file obtained by performing spatial domain slicing on the second video file in advance.
In one example, the file set includes: the video segment file corresponding to the target position information and a plurality of video segment files adjacent to it in the spatial domain. That the enlargement identification module 302 determines, from the video segment files, the file set composed of the video segment files matching the target position information according to the first resolution, the target position information of the playing interface corresponding to the enlarged playing instruction, and the position information of the playing interface corresponding to each video segment file obtained by performing spatial domain slicing on the second video file in advance includes: determining the video segment file corresponding to the target position information according to the target position information of the playing interface corresponding to the enlarged playing instruction and the position information of the playing interface corresponding to each video segment file obtained by performing spatial domain slicing on the second video file in advance; and determining, according to the size of the video segment file corresponding to the target position information and the first resolution, a plurality of video segment files adjacent to it in the spatial domain.
In one example, the first resolution includes a first vertical pixel sum and a first horizontal pixel sum. In the vertical direction, the number H of video segment files adjacent in the spatial domain to the video segment file corresponding to the target position information is calculated by the following formula: H = H(scale) / H(targetTile) − 1,
wherein H(scale) represents the first vertical pixel sum, and H(targetTile) represents the target height of the video segment file corresponding to the target position information. In the horizontal direction, the number W of video segment files adjacent in the spatial domain to the video segment file corresponding to the target position information is calculated by the following formula: W = W(scale) / W(targetTile) − 1,
wherein W(scale) represents the first horizontal pixel sum, and W(targetTile) represents the target width of the video segment file corresponding to the target position information.
In one example, after the instruction sending module 303 triggers the client to switch from playing the first video file to playing a video file obtained by spatially combining the image frames of the video segment files in the file set, the method further includes: if a zoom-out playing instruction sent by the client is received, triggering the client to switch from playing the video file obtained by performing spatial domain combination on the image frames of the video segment files in the file set back to playing the first video file.
It should be understood that the present embodiment is a device embodiment corresponding to the first embodiment, and the present embodiment can be implemented in cooperation with the first embodiment. The related technical details mentioned in the first embodiment are still valid in this embodiment, and are not described herein again in order to reduce repetition. Accordingly, the related-art details mentioned in the embodiments can also be applied to the first embodiment.
It should be noted that, all the modules involved in this embodiment are logic modules, and in practical application, one logic unit may be one physical unit, may also be a part of one physical unit, and may also be implemented by a combination of multiple physical units. In addition, in order to highlight the innovative part of the present application, a unit that is not so closely related to solving the technical problem proposed by the present application is not introduced in the present embodiment, but this does not indicate that there is no other unit in the present embodiment.
A fourth embodiment of the present application relates to a client, as shown in fig. 10, including: an instruction sending module 401, an instruction receiving module 402, and a playing module 403;
the instruction sending module 401 is configured to send an enlarged playing instruction for the first video file to the server after detecting an enlarging operation on the first video file; the first video file is a target video file with a first resolution;
the instruction receiving module 402 is configured to receive a file set formed by video segment files sent by the server, where the file set satisfies the following conditions: each video segment file in the file set is determined from the video segment files obtained by performing spatial-domain slicing on a second video file, and the total number of pixels contained in each image frame obtained by performing spatial-domain combination on the image frames of the video segment files in the file set is not less than the total number of pixels contained in a single image frame of the first video file; the second video file is a target video file with a second resolution, the second resolution being greater than the first resolution;
the playing module 403 is configured to switch, based on the file set formed by the video segment files sent by the server, from playing the first video file to playing the video file obtained by performing spatial-domain combination on the image frames of the video segment files in the file set.
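The spatial-domain combination performed by the playing module 403 can be illustrated per frame as tiling decoded segment frames back into one large frame. The tile coordinates, frame shapes, and NumPy representation below are assumptions for illustration, not part of this application.

```python
# Illustrative sketch: assemble one output image frame from the image frames
# of the segment files, using each slice's position in the playing interface.
import numpy as np

def combine_frames(tiles):
    """tiles: list of (x, y, frame), where frame is an H x W x 3 uint8 array
    and (x, y) is the slice's top-left position in the full frame."""
    width = max(x + f.shape[1] for x, y, f in tiles)
    height = max(y + f.shape[0] for x, y, f in tiles)
    canvas = np.zeros((height, width, 3), dtype=np.uint8)
    for x, y, f in tiles:
        # Paste each tile frame at its spatial-domain position.
        canvas[y:y + f.shape[0], x:x + f.shape[1]] = f
    return canvas
```

A real player would run this composition (typically on the GPU) for every displayed frame while the segment streams stay time-synchronized.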
It should be understood that the present embodiment is a device embodiment corresponding to the second embodiment and can be implemented in cooperation with the second embodiment. The related technical details mentioned in the second embodiment remain valid in this embodiment and are not repeated here in order to reduce repetition; correspondingly, the related technical details mentioned in this embodiment can also be applied to the second embodiment.
It should be noted that all the modules involved in this embodiment are logic modules. In practical application, one logic unit may be one physical unit, a part of one physical unit, or a combination of multiple physical units. In addition, in order to highlight the innovative part of the present application, units less closely related to solving the technical problem proposed by the present application are not introduced in this embodiment, but this does not mean that no other units exist in this embodiment.
A fifth embodiment of the present application relates to an electronic device, as shown in fig. 11, including at least one processor 501 and a memory 502 communicatively coupled to the at least one processor 501, where the memory 502 stores instructions executable by the at least one processor 501, and the instructions are executed by the at least one processor 501 to implement: receiving an amplification playing instruction for a first video file sent by a client, the first video file being a target video file with a first resolution; determining a file set formed by video segment files from the video segment files obtained in advance by performing spatial-domain slicing on a second video file, according to the first resolution and the playing-interface position information corresponding to each of those video segment files, where the second video file is a target video file with a second resolution, the second resolution being greater than the first resolution, and the file set satisfies the condition that the total number of pixels contained in each image frame obtained by performing spatial-domain combination on the image frames of the video segment files in the file set is not less than the total number of pixels contained in a single image frame of the first video file; and, based on the file set, triggering the client to switch from playing the first video file to playing the video file obtained by performing spatial-domain combination on the image frames of the video segment files in the file set.
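The server-side file-set determination described above can be sketched as follows. The slice data layout, the viewport rectangle, and the overlap test are illustrative assumptions; this application does not fix a concrete selection strategy beyond the pixel-count condition.

```python
# Hypothetical sketch: pick the segment files whose playing-interface
# positions intersect the amplified viewport, then check the pixel-count
# condition against a single frame of the first-resolution video file.
from dataclasses import dataclass

@dataclass
class Slice:
    """One video segment file produced by spatial-domain slicing."""
    name: str
    x: int       # top-left position in the playing interface
    y: int
    width: int   # slice resolution in pixels
    height: int

def select_file_set(slices, viewport, first_res):
    """viewport and first_res are (w, h)-style tuples; returns the file set."""
    vx, vy, vw, vh = viewport
    chosen = [s for s in slices
              if s.x < vx + vw and s.x + s.width > vx
              and s.y < vy + vh and s.y + s.height > vy]
    total = sum(s.width * s.height for s in chosen)
    if total < first_res[0] * first_res[1]:
        raise ValueError("file set does not satisfy the pixel-count condition")
    return chosen
```

For example, a 3840x2160 second video file sliced into four 1920x1080 segments yields a single-segment file set when the viewport covers one quadrant, and that set already matches the pixel count of a 1920x1080 first video file.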
Specifically, the electronic device includes one or more processors 501 and a memory 502; one processor 501 is taken as an example in fig. 11. The processor 501 and the memory 502 may be connected by a bus or by other means; fig. 11 takes connection by a bus as an example. The memory 502, as a computer-readable storage medium, may be used to store computer software programs, computer-executable programs, and modules. The processor 501 executes the various functional applications and data processing of the device, that is, implements the video file play switching method described above, by running the computer software programs, instructions, and modules stored in the memory 502.
The memory 502 may include a program storage area and a data storage area, where the program storage area may store an operating system and an application program required for at least one function, and the data storage area may store a list of options and the like. Further, the memory 502 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device. In some embodiments, the memory 502 may optionally include memory located remotely from the processor 501, and such remote memory may be connected to an external device via a network. Examples of such networks include, but are not limited to, the Internet, intranets, local area networks, mobile communication networks, and combinations thereof.
One or more modules are stored in the memory 502 and, when executed by the one or more processors 501, perform the video file play switching method in any of the above method embodiments.
The above product can execute the method provided by the embodiments of the present application and has the functional modules and beneficial effects corresponding to the executed method. For technical details not described in detail in this embodiment, reference may be made to the method provided by the embodiments of the present application.
A sixth embodiment of the present application relates to a computer-readable storage medium storing a computer program. When executed by a processor, the computer program implements the video file play switching method of the above embodiments.
That is, as can be understood by those skilled in the art, all or part of the steps of the methods in the embodiments described above may be implemented by a program instructing related hardware. The program is stored in a storage medium and includes several instructions for causing a device (which may be a microcontroller, a chip, or the like) or a processor to execute all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
It will be understood by those of ordinary skill in the art that the foregoing embodiments are specific examples for carrying out the present application, and that various changes in form and detail may be made therein in practice without departing from the spirit and scope of the present application.
Claims (12)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011387176.9A CN112702626A (en) | 2020-12-01 | 2020-12-01 | Video file playing switching method, server, client, equipment and medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN112702626A true CN112702626A (en) | 2021-04-23 |
Family ID: 75506094
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2004289224A (en) * | 2003-03-19 | 2004-10-14 | Hitachi Information Systems Ltd | Video distribution server and video distribution system |
WO2017016339A1 (en) * | 2015-07-27 | 2017-02-02 | 腾讯科技(深圳)有限公司 | Video sharing method and device, and video playing method and device |
US20180035137A1 (en) * | 2015-07-27 | 2018-02-01 | Tencent Technology (Shenzhen) Company Limited | Video sharing method and device, and video playing method and device |
CN108140401A (en) * | 2015-09-29 | 2018-06-08 | 诺基亚技术有限公司 | Access video clip |
CN105744239A (en) * | 2016-05-11 | 2016-07-06 | 湖南源信光电科技有限公司 | Multi-focal-length lens ultrahigh resolution linkage imaging device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | Application publication date: 20210423 ||