CN112954370B - Encoding method, device and equipment for audio and video live broadcast - Google Patents
- Publication number
- CN112954370B (Application CN202110120084.2A)
- Authority
- CN
- China
- Prior art keywords
- video
- mixed
- image
- image frames
- frame
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/2181—Source of audio or video content, e.g. local disk arrays, comprising remotely distributed storage units, e.g. when movies are replicated over a plurality of video servers
- H04N21/2187—Live feed
- H04N21/44016—Processing of video elementary streams involving splicing one content stream with another content stream, e.g. for substituting a video clip
- H04N21/440236—Processing of video elementary streams involving reformatting operations of video signals for household redistribution, storage or real-time display by media transcoding, e.g. video is transformed into a slideshow of still pictures, audio is converted into text
Abstract
The invention discloses an encoding method, apparatus and device for audio and video live broadcast, wherein the method comprises the following steps: acquiring video image frames of multiple video streams to be pushed; expanding the video image frames into one-dimensional arrays, and splicing the one-dimensional arrays corresponding to the video streams to generate one mixed image frame; and encoding the mixed image frames, pushing the encoded video images to a CDN network, and generating the picture data of the audio and video live broadcast. The embodiment of the invention does not splice the images directly, but converts them into one-dimensional data before splicing, thereby avoiding the data waste generated during image splicing and reducing the traffic and bandwidth consumed during transmission.
Description
Technical Field
The present invention relates to the field of live broadcasting technologies, and in particular, to a method, an apparatus, and a device for encoding audio and video live broadcasting.
Background
In the current online live broadcast field, live content generally includes an anchor video area and other material video areas (such as PPT, documents, video, and images), as shown in fig. 1. During a live broadcast, the content must be distributed to viewers through a CDN. For technical reasons, however, multiple streams are not processed at the CDN level, so when a user pulls multiple streams, the videos viewed are not synchronized. The current solution is to mix the multiple video streams on the live broadcast device before pushing them to the CDN, using image splicing (multiple image frames are spliced into one frame), and to push the mixed video stream to the CDN for broadcast. Although this solves the synchronization problem, the image resolutions of different video streams differ, so splicing the (rectangular) images inevitably produces regions that carry no image information. The mixed-stream splicing method of the prior art therefore yields a spliced image area larger than the sum of the image areas of the individual video streams, wasting traffic and bandwidth.
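The waste described above can be made concrete with a small arithmetic sketch. The resolutions below are hypothetical examples, not values from the patent: tiling two frames of different heights side by side forces the canvas to the taller frame's height, creating pixels that carry no image information.

```python
# Two example video streams of different resolutions: (width, height).
streams = [(1280, 720), (640, 480)]

# 2D side-by-side splice: canvas width is the sum of the widths,
# canvas height is the maximum of the heights.
canvas_w = sum(w for w, h in streams)
canvas_h = max(h for w, h in streams)
spliced_pixels = canvas_w * canvas_h

# Sum of the actual image pixels of all streams (what a 1D concatenation carries).
useful_pixels = sum(w * h for w, h in streams)

wasted = spliced_pixels - useful_pixels
print(spliced_pixels, useful_pixels, wasted)  # 1382400 1228800 153600
```

Here roughly 11% of the spliced canvas carries no image information, which is the traffic the one-dimensional concatenation avoids.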
Accordingly, the prior art is still in need of improvement and development.
Disclosure of Invention
In view of the shortcomings of the prior art, the invention aims to provide an encoding method, apparatus and device for audio and video live broadcast, so as to solve the technical problem in the prior art that the image area after mixed-stream splicing is larger than the sum of the image areas of the individual video streams, which wastes traffic and bandwidth.
The technical scheme of the invention is as follows:
An encoding method for audio and video live broadcast, the method comprising:
acquiring video image frames of multiple video streams to be pushed;
expanding the video image frames into one-dimensional arrays, and splicing the one-dimensional arrays corresponding to the video streams to generate one mixed image frame;
and encoding the mixed image frames, pushing the encoded video images to a CDN network, and generating the picture data of the audio and video live broadcast.
Further, the acquiring video image frames of multiple video streams to be pushed includes:
detecting the multiple video streams to be pushed that are sent to the local device, and acquiring the video image frame corresponding to each of the streams received at the same moment.
Further preferably, the expanding the video image frames into one-dimensional arrays includes:
acquiring the two-dimensional array corresponding to the video image frame of each video stream;
expanding the two-dimensional array into a one-dimensional array in which the rows are connected end to end.
Further preferably, the generating one mixed image frame after splicing the one-dimensional arrays corresponding to the video streams includes:
connecting the one-dimensional arrays corresponding to the video frames of the respective video streams end to end to generate one mixed image frame.
Preferably, the generating one mixed image frame after the one-dimensional arrays corresponding to the video frames of the respective video streams are connected end to end includes:
connecting the one-dimensional arrays corresponding to the video frames of the respective video streams end to end to generate a mixed image array;
and adding mixed-stream parameters to the mixed image array to generate one mixed image frame.
Further, the adding mixed-stream parameters to the mixed image array includes:
adding the mixed-stream parameters to the head of the mixed image array, wherein the mixed-stream parameters include the number of video streams in the mixed stream and the image resolution corresponding to each video stream.
Further, the pushing the encoded video images to the CDN network and generating the picture data of the audio and video live broadcast includes:
pulling the mixed image frames, and segmenting each mixed image frame according to the mixed-stream parameters;
splitting and restoring the segmented image frames to generate the picture data of the audio and video live broadcast.
Another embodiment of the present invention provides an encoding apparatus for audio and video live broadcast, the apparatus including:
an image frame acquisition module, used for acquiring video image frames of multiple video streams to be pushed;
an image frame mixing module, used for expanding the video image frames into one-dimensional arrays and splicing the one-dimensional arrays corresponding to the video streams to generate one mixed image frame;
and a stream pushing module, used for encoding the mixed image frames, pushing the encoded video images to a CDN network, and generating the picture data of the audio and video live broadcast.
Another embodiment of the present invention provides an encoding device for audio and video live broadcast, the device comprising: at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the encoding method for audio and video live broadcast described above.
Another embodiment of the present invention also provides a non-volatile computer-readable storage medium storing computer-executable instructions that, when executed by one or more processors, cause the one or more processors to perform the encoding method for live audio and video as described above.
The beneficial effects are as follows: the embodiment of the invention does not splice the images directly, but converts them into one-dimensional data before splicing, thereby avoiding the data waste generated during image splicing and reducing the traffic and bandwidth consumed during transmission.
Drawings
The invention will be further described with reference to the accompanying drawings and examples, in which:
FIG. 1 is a prior art live interface schematic;
FIG. 2 is a flowchart of a method for encoding audio/video live broadcast according to a preferred embodiment of the present invention;
fig. 3 is a flowchart of a specific application embodiment of an encoding method for live audio and video broadcast according to the present invention;
fig. 4 is a schematic functional block diagram of a coding apparatus for live audio and video according to a preferred embodiment of the present invention;
fig. 5 is a schematic hardware structure of a coding device for live audio and video according to a preferred embodiment of the present invention.
Detailed Description
The present invention will be described in further detail below in order to make the objects, technical solutions and effects of the present invention more clear and distinct. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention. Embodiments of the present invention are described below with reference to the accompanying drawings.
The embodiment of the invention provides a coding method for audio and video live broadcasting. Referring to fig. 2, fig. 2 is a flowchart of a coding method for live audio and video according to a preferred embodiment of the present invention. As shown in fig. 2, it includes the steps of:
step S100, acquiring video image frames of multiple video streams to be pushed;
step S200, expanding the video image frames into one-dimensional arrays, and splicing the one-dimensional arrays corresponding to the video streams to generate one mixed image frame;
and step S300, encoding the mixed image frames, pushing the encoded video images to a CDN network, and generating the picture data of the audio and video live broadcast.
In specific implementation, with the original CDN (Content Delivery Network) live stream-pushing mode unchanged, the multiple video streams are not spliced as images. Instead, each stream's image frame is expanded from a two-dimensional image into a one-dimensional array, the one-dimensional arrays of the multiple frames are spliced to generate one mixed image frame, each mixed image frame is encoded, and the result is pushed to the CDN for live broadcast. This avoids the data waste generated during image splicing and reduces the traffic and bandwidth consumed during transmission. A CDN is an intelligent virtual network built on top of the existing network: relying on edge servers deployed in various locations and on the load balancing, content distribution and scheduling modules of a central platform, it lets users obtain the required content from a nearby server, which reduces network congestion and improves the response speed and hit rate of user access, facilitating live broadcast.
Still further, as shown in fig. 3, in an embodiment of the present invention, the live broadcast terminal obtains the video stream of the anchor camera and the other video streams. It extracts an image frame from the anchor camera stream and expands it one-dimensionally to generate the image data array of the anchor camera, and likewise extracts the other video image frames and expands them one-dimensionally to generate their corresponding image data arrays. It then splices the anchor camera's image data array with the other image data arrays to generate a mixed data stream, adds header information, encodes the mixed data stream, and sends the encoded data to the CDN network to generate the live broadcast picture.
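The mixing flow of fig. 3 can be sketched end to end as follows. This is a minimal illustration with hypothetical names and tiny integer "frames" standing in for pixel data; the patent does not prescribe a concrete data layout.

```python
def mix_frames(frames):
    """Flatten each stream's 2D frame into a 1D list (row-major), record its
    resolution for the header, and concatenate everything into one mixed
    data stream: [stream count, w1, h1, w2, h2, ..., pixel data...]."""
    resolutions, mixed = [], []
    for frame in frames:
        h, w = len(frame), len(frame[0])
        resolutions.append((w, h))
        mixed.extend(px for row in frame for px in row)
    header = [len(frames)] + [d for wh in resolutions for d in wh]
    return header + mixed

cam = [[10, 11], [12, 13]]   # anchor-camera "frame", 2x2 pixels
doc = [[20, 21, 22]]         # material "frame", 3x1 pixels
stream = mix_frames([cam, doc])
print(stream)  # [2, 2, 2, 3, 1, 10, 11, 12, 13, 20, 21, 22]
```

The header-then-payload shape mirrors the description: a receiver that knows the stream count and per-stream resolutions can cut the payload back into individual frames.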
Further, acquiring video image frames of multiple video streams to be pushed includes:
detecting the multiple video streams to be pushed that are sent to the local device, and acquiring the video image frame corresponding to each of the streams received at the same moment.
In implementation, when a traditional live stream is broadcast through a CDN network, the video stream of the anchor camera and the other video streams are pushed to the CDN as independent video streams, and the user then pulls the multiple streams to a terminal for simultaneous viewing, so the streams may be out of synchronization. The embodiment of the invention acquires the multiple video image frames locally, before the live broadcast device pushes the streams, in preparation for the subsequent image frame processing.
Further, expanding the video image frames into one-dimensional arrays includes:
acquiring the two-dimensional array corresponding to the video image frame of each video stream;
expanding the two-dimensional array into a one-dimensional array in which the rows are connected end to end.
In implementation, before the live broadcast device pushes the stream, the multiple video image frames are first expanded locally from two dimensions into one-dimensional arrays, and the expanded arrays are then concatenated transversely. Specifically, the image frame of each video stream is a two-dimensional image composed of pixels arranged in rows and columns; the expansion process unfolds these pixels from two dimensions into a single row of pixels connected end to end, i.e. a one-dimensional pixel array.
Each one-dimensionally expanded video image frame is then connected end to end with the others, that is, the one-dimensional arrays of the multiple images are concatenated transversely.
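The expansion step above can be sketched directly: unfolding a two-dimensional pixel array row by row into one row connected end to end. The tiny frames and the helper name are illustrative only.

```python
def flatten_frame(frame):
    """Unfold a two-dimensional pixel array into a single row connected
    end to end (row-major order)."""
    return [px for row in frame for px in row]

# Two toy "frames" of different resolutions; integers stand in for pixels.
frame_a = [[1, 2], [3, 4]]   # 2x2
frame_b = [[5, 6, 7]]        # 3x1

# Transverse concatenation of the one-dimensional arrays.
mixed = flatten_frame(frame_a) + flatten_frame(frame_b)
print(mixed)  # [1, 2, 3, 4, 5, 6, 7]
```

Note that the concatenated array holds exactly the pixels of both frames, with no padding, regardless of the resolution mismatch.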
Further, the generating one mixed image frame after splicing the one-dimensional arrays corresponding to the video streams includes the following steps:
connecting the one-dimensional arrays corresponding to the video frames of the respective video streams end to end to generate one mixed image frame.
In implementation, each video stream's image frame is expanded from a two-dimensional image into a one-dimensional array, and the one-dimensional arrays of the multiple frames are concatenated transversely, generating a mixed image frame in which the images of the multiple video streams are mixed.
Further, the generating one mixed image frame after the one-dimensional arrays corresponding to the video frames of the respective video streams are connected end to end includes the following steps:
connecting the one-dimensional arrays corresponding to the video frames of the respective video streams end to end to generate a mixed image array;
and adding mixed-stream parameters to the mixed image array to generate one mixed image frame.
In implementation, after the arrays are connected end to end, a mixed image frame data stream is generated that contains only the image frame data of each stream together with the mixed-stream parameters of the mixed data stream. The mixed stream is encoded and then pushed to the CDN for live broadcast, so that in the end only one mixed video stream is pushed to the CDN.
Further, adding mixed-stream parameters to the mixed image array includes:
adding the mixed-stream parameters to the head of the mixed image array, wherein the mixed-stream parameters include the number of video streams in the mixed stream and the image resolution corresponding to each video stream.
In specific implementation, each mixed image frame data stream starts with header information. Besides a fixed start marker, the header information contains the mixed-stream parameters of the mixed data stream, including data such as the number of video streams and the frame resolution of each video stream.
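A minimal sketch of such a header follows. The byte layout (the `MIX0` magic, big-endian 16-bit fields) is a hypothetical choice for illustration; the patent specifies only what the header carries, not its wire format.

```python
import struct

def build_header(resolutions, magic=b"MIX0"):
    """Pack the mixed-stream parameters into a header: a fixed start marker,
    the number of video streams, then each stream's frame width and height
    as big-endian 16-bit unsigned integers."""
    header = magic + struct.pack(">H", len(resolutions))
    for w, h in resolutions:
        header += struct.pack(">HH", w, h)
    return header

hdr = build_header([(1280, 720), (640, 480)])
print(len(hdr))  # 4 magic + 2 count + 2 streams x 4 resolution bytes = 14
```

A real implementation would also need to agree on the pixel format (bytes per pixel), since the receiver multiplies width × height × pixel size to locate each stream's data.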
Further, pushing the encoded video images to the CDN network to generate the audio and video live broadcast picture data includes:
pulling the mixed image frames, and segmenting each mixed image frame according to the mixed-stream parameters;
splitting and restoring the segmented image frames to generate the picture data of the audio and video live broadcast.
In implementation, the user side pulls the mixed stream, segments the mixed-stream image frames according to the header information in the mixed stream, and then splits and restores the image frames according to the data in the header. This solves the delay and desynchronization between videos that occur when the CDN pushes two streams independently; at the same time, because the mixed images are not spliced as two-dimensional images, no extra information-free image regions are generated and no extra transmission traffic or bandwidth is consumed.
For example, the mixed image frame data stream may be segmented by means of the inserted header information, which contains the number of video streams in the mixed video stream and the resolution of each stream's frames. If the total number of video streams is 3 and the frame resolutions are w1×h1, w2×h2 and w3×h3, then the first w1×h1×(data length of a single pixel) bytes of the data stream are the one-dimensional expansion array of the first stream's frame, the following w2×h2×(data length of a single pixel) bytes are the one-dimensional expansion array of the second stream's frame, and the following w3×h3×(data length of a single pixel) bytes are the one-dimensional expansion array of the third stream's frame.
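The segmentation rule above can be sketched as a short function. The function name and the toy payload are illustrative; only the length arithmetic (width × height × bytes per pixel, consumed in order) comes from the description.

```python
def split_mixed(payload, resolutions, bytes_per_pixel=3):
    """Cut the mixed one-dimensional data stream back into per-stream frames
    using the resolutions carried in the header: each frame occupies
    width * height * bytes_per_pixel bytes, in order."""
    frames, offset = [], 0
    for w, h in resolutions:
        length = w * h * bytes_per_pixel
        frames.append(payload[offset:offset + length])
        offset += length
    return frames

# Toy example: three "streams" of 2x1, 1x1 and 2x2 pixels, 1 byte per pixel.
resolutions = [(2, 1), (1, 1), (2, 2)]
payload = bytes(range(2 + 1 + 4))
frames = split_mixed(payload, resolutions, bytes_per_pixel=1)
print([len(f) for f in frames])  # [2, 1, 4]
```

Each slice can then be reshaped back into its original width × height image, restoring the individual live broadcast pictures in sync.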
As can be seen from the above method embodiments, the present invention provides an encoding method for audio and video live broadcast in which, with the original CDN live stream-pushing mode unchanged, the multiple video streams are not spliced as images. Instead, each mixed image frame data stream starts with header information that, besides a fixed start marker, contains the relevant information about the mixed data stream (including the number of video streams and the frame resolution of each stream); the mixed stream is encoded and then pushed to the CDN for live broadcast. This avoids the data waste generated during image splicing and reduces the traffic and bandwidth consumed during transmission.
It should be noted that, there is not necessarily a certain sequence between the steps, and those skilled in the art will understand that, in different embodiments, the steps may be performed in different orders, that is, may be performed in parallel, may be performed interchangeably, or the like.
Another embodiment of the present invention provides an encoding apparatus for audio and video live broadcast; as shown in fig. 4, the apparatus 1 includes:
the image frame acquisition module 11, used for acquiring video image frames of multiple video streams to be pushed;
the image frame mixing module 12, configured to expand the video image frames into one-dimensional arrays and splice the one-dimensional arrays corresponding to the video streams to generate one mixed image frame;
and the stream pushing module 13, used for encoding the mixed image frames, pushing the encoded video images to a CDN network, and generating the picture data of the audio and video live broadcast.
The specific implementation is shown in the method embodiment, and will not be described herein.
Another embodiment of the present invention provides an encoding apparatus for live audio and video, as shown in fig. 5, the apparatus 10 includes:
one or more processors 110 and a memory 120; one processor 110 is taken as an example in fig. 5. The processor 110 and the memory 120 may be connected by a bus or other means; a bus connection is illustrated in fig. 5.
Processor 110 is used to complete the various control logic of device 10, which may be a general purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA), a single-chip microcomputer, ARM (Acorn RISC Machine) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination of these components. Also, the processor 110 may be any conventional processor, microprocessor, or state machine. The processor 110 may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration.
The memory 120 is used as a non-volatile computer readable storage medium, and can be used to store non-volatile software programs, non-volatile computer executable programs, and modules, such as program instructions corresponding to the encoding method for live audio and video in the embodiment of the present invention. The processor 110 performs various functional applications of the device 10 and data processing, i.e. implements the encoding method for live audio video in the above-described method embodiments, by running non-volatile software programs, instructions and units stored in the memory 120.
The memory 120 may include a program storage area, which may store an operating system and the application programs required for at least one function, and a data storage area; the data storage area may store data created by the use of the device 10, etc. In addition, the memory 120 may include high-speed random access memory and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device. In some embodiments, the memory 120 may optionally include memory located remotely from the processor 110, which may be connected to the device 10 via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
One or more units are stored in the memory 120 that, when executed by the one or more processors 110, perform the encoding method for live audio video in any of the method embodiments described above, e.g., perform method steps S100-S300 in fig. 2 described above.
Embodiments of the present invention provide a non-transitory computer-readable storage medium storing computer-executable instructions for execution by one or more processors, e.g., to perform the method steps S100-S300 of fig. 2 described above.
By way of example, nonvolatile storage media can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM), which acts as external cache memory. By way of illustration and not limitation, RAM is available in many forms such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), and direct Rambus RAM (DRRAM). The disclosed memory components or memories of the operating environments described herein are intended to comprise one or more of these and/or any other suitable types of memory.
Another embodiment of the present invention provides a computer program product comprising a computer program stored on a non-transitory computer readable storage medium, the computer program comprising program instructions which, when executed by a processor, cause the processor to perform the encoding method for live audio video of the above method embodiment. For example, the above-described method steps S100 to S300 in fig. 2 are performed.
The embodiments described above are merely illustrative, wherein elements illustrated as separate elements may or may not be physically separate, and elements shown as elements may or may not be physical elements, may be located in one place, or may be distributed over a plurality of network elements. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
From the above description of embodiments, it will be apparent to those skilled in the art that the embodiments may be implemented by means of software plus a general purpose hardware platform, or may be implemented by hardware. Based on such understanding, the foregoing technical solution may be embodied essentially or in a part contributing to the related art in the form of a software product, which may exist in a computer-readable storage medium such as ROM/RAM, a magnetic disk, an optical disk, etc., including several instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the method of the respective embodiments or some parts of the embodiments.
Conditional language such as "can", "could", "might" or "may", unless specifically stated otherwise or otherwise understood within the context in which it is used, is generally intended to convey that particular embodiments can include (while other embodiments do not include) particular features, elements and/or operations. Thus, such conditional language is generally not intended to imply that features, elements and/or operations are in any way required for one or more embodiments, or that one or more embodiments must include logic for deciding, with or without input or prompting, whether these features, elements and/or operations are included or are to be performed in any particular embodiment.
What has been described herein in the present specification and figures includes examples of methods and apparatus capable of providing encoding for live audio video. It is, of course, not possible to describe every conceivable combination of components and/or methodologies for purposes of describing the various features of the present disclosure, but it may be appreciated that many further combinations and permutations of the disclosed features are possible. It is therefore evident that various modifications may be made thereto without departing from the scope or spirit of the disclosure. Further, or in the alternative, other embodiments of the disclosure may be apparent from consideration of the specification and drawings, and practice of the disclosure as presented herein. It is intended that the examples set forth in this specification and figures be considered illustrative in all respects as illustrative and not limiting. Although specific terms are employed herein, they are used in a generic and descriptive sense only and not for purposes of limitation.
Claims (9)
1. An encoding method for live audio and video, the method comprising:
acquiring video image frames of a plurality of paths of videos to be pushed;
expanding the video image frames into one-dimensional arrays, and splicing the one-dimensional arrays corresponding to the videos to generate one mixed image frame;
encoding the mixed image frame, pushing the encoded video image to a CDN network, and generating picture data of the live audio and video;
wherein the expanding the video image frames into one-dimensional arrays comprises:
acquiring a two-dimensional array corresponding to the video image frames of each path of video; and
unfolding the two-dimensional array into a one-dimensional array of a single row, with the rows connected end to end.
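The flattening and splicing steps of claim 1 can be sketched as follows. This is an illustrative sketch only, not the claimed implementation; the function names and toy pixel values are hypothetical.

```python
# Sketch of claim 1's mixing step: each video image frame is a 2-D array of
# pixel values, unfolded row by row into a 1-D array; the per-stream 1-D
# arrays are then spliced end to end into a single mixed image frame.

def flatten_frame(frame_2d):
    """Unfold a 2-D frame into one row, with the rows connected end to end."""
    return [pixel for row in frame_2d for pixel in row]

def mix_frames(frames):
    """Splice the flattened frames of several streams into one mixed frame."""
    mixed = []
    for frame in frames:
        mixed.extend(flatten_frame(frame))
    return mixed

# Two toy streams: a 2x2 frame and a 2x3 frame (values are placeholders).
stream_a = [[1, 2],
            [3, 4]]
stream_b = [[5, 6, 7],
            [8, 9, 10]]
mixed = mix_frames([stream_a, stream_b])
print(mixed)  # [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
```

Because the streams may have different resolutions, the mixed frame alone is not self-describing; per claims 4-6, mixed-stream parameters must accompany it so the receiver can split it back apart.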
2. The encoding method for live audio and video according to claim 1, wherein the acquiring video image frames of a plurality of paths of videos to be pushed comprises:
detecting a plurality of paths of videos to be pushed that are sent to the local end, and acquiring the video image frames corresponding to each path of the plurality of paths of videos received at the same time.
3. The encoding method for live audio and video according to claim 2, wherein the splicing the one-dimensional arrays corresponding to the videos to generate one mixed image frame comprises:
connecting the one-dimensional arrays corresponding to the video frames of each path of video end to end to generate one mixed image frame.
4. The encoding method for live audio and video according to claim 3, wherein the generating one mixed image frame after the one-dimensional arrays corresponding to the video frames of each path of video are connected end to end comprises:
connecting the one-dimensional arrays corresponding to the video frames of each path of video end to end to generate a mixed image array; and
adding mixed stream parameters to the mixed image array to generate one mixed image frame.
5. The encoding method for live audio and video according to claim 4, wherein the adding mixed stream parameters to the mixed image array comprises:
adding the mixed stream parameters to the head of the mixed image array, wherein the mixed stream parameters comprise the number of video streams in the mixed stream and the image resolution corresponding to each video stream.
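The header described in claims 4 and 5 can be sketched as below. The claims do not specify an exact byte layout, so the flat integer header used here — stream count followed by (height, width) per stream, prepended to the spliced pixel data — is a hypothetical example, as are the function names.

```python
def add_mix_params(mixed_pixels, resolutions):
    """Prepend mixed-stream parameters to the head of the mixed image array:
    the number of video streams, then (height, width) for each stream."""
    header = [len(resolutions)]
    for height, width in resolutions:
        header.extend([height, width])
    return header + mixed_pixels

# Toy payload holding two spliced streams: a 2x2 frame then a 1x3 frame.
payload = [1, 2, 3, 4, 5, 6, 7]
framed = add_mix_params(payload, [(2, 2), (1, 3)])
print(framed)  # [2, 2, 2, 1, 3, 1, 2, 3, 4, 5, 6, 7]
```

Placing the parameters at the head means a receiver can parse the stream count and resolutions before touching any pixel data, which is what makes the later segmentation step possible.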
6. The encoding method for live audio and video according to claim 5, wherein the pushing the encoded video image to a CDN network and generating picture data of the live audio and video comprises:
pulling the mixed image frame, and performing image segmentation on the mixed image frame according to the mixed stream parameters; and
splitting and restoring the segmented image frames to generate the picture data of the live audio and video.
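The pull-and-restore step of claim 6 can be sketched as the inverse of the mixing: parse the parameters at the head of the mixed image frame, then cut the payload back into per-stream two-dimensional frames. The header layout assumed here (stream count, then height and width per stream, as plain integers) and the function name are hypothetical, since the claim leaves the format unspecified.

```python
def split_mixed(frame):
    """Parse the mixed-stream parameters at the head of a mixed image frame,
    then segment the payload back into per-stream 2-D image frames."""
    num_streams = frame[0]
    pos = 1
    resolutions = []
    for _ in range(num_streams):
        resolutions.append((frame[pos], frame[pos + 1]))
        pos += 2
    streams = []
    for height, width in resolutions:
        flat = frame[pos:pos + height * width]
        pos += height * width
        # Re-fold the 1-D slice into `height` rows of `width` pixels each.
        streams.append([flat[r * width:(r + 1) * width] for r in range(height)])
    return streams

# Mixed frame carrying 2 streams: a 2x2 frame followed by a 1x3 frame.
mixed_frame = [2, 2, 2, 1, 3, 1, 2, 3, 4, 5, 6, 7]
print(split_mixed(mixed_frame))  # [[[1, 2], [3, 4]], [[5, 6, 7]]]
```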
7. An encoding device for live audio and video, the device comprising:
an image frame acquisition module, configured to acquire video image frames of a plurality of paths of videos to be pushed;
an image frame mixing module, configured to expand the video image frames into one-dimensional arrays and splice the one-dimensional arrays corresponding to the videos to generate one mixed image frame; and
a stream pushing module, configured to encode the mixed image frame, push the encoded video image to a CDN network, and generate picture data of the live audio and video;
wherein the expanding the video image frames into one-dimensional arrays comprises:
acquiring a two-dimensional array corresponding to the video image frames of each path of video; and
unfolding the two-dimensional array into a one-dimensional array of a single row, with the rows connected end to end.
8. An encoding device for live audio and video, the device comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the encoding method for live audio and video as claimed in any one of claims 1-6.
9. A non-transitory computer-readable storage medium storing computer-executable instructions which, when executed by one or more processors, cause the one or more processors to perform the encoding method for live audio and video as claimed in any one of claims 1-6.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110120084.2A CN112954370B (en) | 2021-01-28 | 2021-01-28 | Encoding method, device and equipment for audio and video live broadcast |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112954370A CN112954370A (en) | 2021-06-11 |
CN112954370B true CN112954370B (en) | 2023-09-26 |
Family
ID=76238843
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110120084.2A Active CN112954370B (en) | 2021-01-28 | 2021-01-28 | Encoding method, device and equipment for audio and video live broadcast |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112954370B (en) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105898302A (en) * | 2016-04-28 | 2016-08-24 | 上海斐讯数据通信技术有限公司 | Image transmission method and system based on compressed sensing |
CN107105315A (en) * | 2017-05-11 | 2017-08-29 | 广州华多网络科技有限公司 | Live broadcasting method, the live broadcasting method of main broadcaster's client, main broadcaster's client and equipment |
WO2018095174A1 (en) * | 2016-11-22 | 2018-05-31 | 广州华多网络科技有限公司 | Control method, device, and terminal apparatus for synthesizing video stream of live streaming room |
CN111479112A (en) * | 2020-06-23 | 2020-07-31 | 腾讯科技(深圳)有限公司 | Video coding method, device, equipment and storage medium |
CN111726634A (en) * | 2020-07-01 | 2020-09-29 | 成都傅立叶电子科技有限公司 | High-resolution video image compression transmission method and system based on FPGA |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP3301920A1 (en) * | 2016-09-30 | 2018-04-04 | Thomson Licensing | Method and apparatus for coding/decoding omnidirectional video |
- 2021-01-28: CN application CN202110120084.2A granted as patent CN112954370B (status: Active)
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||