
CN115695918A - Multi-camera broadcast guide control method and device, readable storage medium and terminal equipment - Google Patents


Info

Publication number
CN115695918A
CN115695918A (application CN202310012321.2A)
Authority
CN
China
Prior art keywords
video frame
queue
camera
frame data
pts
Prior art date
Legal status
Granted
Application number
CN202310012321.2A
Other languages
Chinese (zh)
Other versions
CN115695918B (en)
Inventor
陈少泽
黄爱兵
侯俊晖
熊凯
唐伊强
江昆
Current Assignee
Nanchang Bingo Information Technology Co ltd
Original Assignee
Nanchang Bingo Information Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Nanchang Bingo Information Technology Co ltd
Priority to CN202310012321.2A
Publication of CN115695918A
Application granted
Publication of CN115695918B
Legal status: Active

Classifications

    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D30/00: Reducing energy consumption in communication networks
    • Y02D30/70: Reducing energy consumption in communication networks in wireless communication networks

Landscapes

  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

A multi-camera director control method, apparatus, readable storage medium and terminal device are provided. When a director instruction is acquired, the method acquires the video frame data packets of the director camera in real time and acquires the pts information of the main camera's video frame data packets; imports the acquired video frame data packets of the director camera into a video frame queue, and imports the acquired pts information of the main camera's video frame data packets into a pts queue; extracts one pts value and one video frame data packet from the pts queue and the video frame queue respectively in queue order, and assigns the extracted pts value to the extracted video frame data packet; and pushes the pts-stamped video frame data packet to a streaming media server. The invention reduces device energy consumption and improves response speed and efficiency.

Description

Multi-camera broadcast guide control method and device, readable storage medium and terminal equipment
Technical Field
The invention relates to the technical field of internet, in particular to a multi-camera director control method, a multi-camera director control device, a readable storage medium and terminal equipment.
Background
With the rapid development of the internet, education has changed dramatically, and internet-based online learning is increasingly widely applied. Mobile live broadcasting is widely used in daily activities such as educational activities, educational research, and campus events.
Current mobile live broadcasts often use multiple cameras, each capturing images from a different viewing angle, so that multi-angle broadcasting can be achieved by switching cameras. When directing a multi-camera broadcast, the image from each camera must be decoded for display, and when one camera's picture is selected for output, that picture is video-encoded and retransmitted. The device therefore has to both decode and encode, and video encoding typically places heavy demands on the device, resulting in short battery life, slow response, and low efficiency.
Disclosure of Invention
In view of the above, it is desirable to provide a multi-camera director control method, apparatus, readable storage medium and terminal device, so as to reduce the energy consumption of the device and improve the response speed and efficiency.
A multi-camera director control method comprises:
when a director instruction is acquired, acquiring the video frame data packets of the director camera in real time and acquiring the pts information of the main camera's video frame data packets, wherein the director camera is the camera currently being directed to broadcast, the main camera is a camera selected in advance from the plurality of cameras, and a video frame data packet is video frame data already encoded by the camera;
importing the acquired video frame data packets of the director camera into a video frame queue, and importing the acquired pts information of the main camera's video frame data packets into a pts queue;
extracting one pts value and one video frame data packet from the pts queue and the video frame queue respectively in queue order, and assigning the extracted pts value to the extracted video frame data packet;
and pushing the video frame data packet stamped with the pts information to a streaming media server.
Further, in the multi-camera director control method, the step of importing the acquired video frame data packets of the director camera into the video frame queue includes:
after the director instruction is acquired, importing the director camera's video frames into the video frame queue starting from receipt of the video frame data packet of the director camera's first key frame.
Further, the multi-camera director control method further includes:
monitoring the lengths of the video frame queue and the pts queue;
and when the length of the video frame queue is greater than that of the pts queue, performing frame loss processing on the video frame queue.
Further, in the multi-camera director control method, the step of performing frame-loss processing on the video frame queue includes:
judging whether the difference between the lengths of the video frame queue and the pts queue is less than or equal to a threshold value;
if so, when a key frame of the director camera is acquired, replacing the last video frame data packet in the video frame queue with the key frame's video frame data packet;
if not, when a key frame of the director camera is acquired, deleting all video frame data packets currently in the video frame queue and beginning the import with the key frame's video frame data packet.
Further, in the multi-camera director control method, after the step of monitoring the lengths of the video frame queue and the pts queue, the method further comprises:
and when the length of the pts queue is greater than that of the video frame queue and the difference value between the lengths of the pts queue and the video frame queue is less than or equal to a threshold value, performing frame supplementing processing on the video frame queue.
Further, in the multi-camera director control method, the step of performing frame-supplement processing on the video frame queue includes:
when a key frame of the director camera is acquired, copying the key frame's video frame data packet and importing both the original and the copied video frame data packet into the video frame queue.
Further, in the multi-camera director control method, after the step of monitoring the lengths of the video frame queue and the pts queue, the method further comprises:
and when the length of the pts queue is greater than that of the video frame queue and the difference between the two lengths is greater than a threshold value, deleting all pts information currently in the pts queue.
Further, in the multi-camera director control method, when a director instruction is obtained, audio data sent by the main camera is also obtained in real time;
the step of pushing the video frame data packet with the pts information to the streaming media server further includes:
and pushing the audio data of the main camera to a streaming media server.
Further, in the multi-camera director control method, the step of acquiring the video frame data packets of the director camera in real time and acquiring the pts information of the main camera's video frame data packets further includes:
acquiring the video frame data packets of all cameras in real time, decoding them, and displaying each on the current display interface, wherein each video frame data packet carries the encoded video frame data and its corresponding pts information.
The invention also discloses a multi-camera director control apparatus, comprising:
the acquisition module, configured to acquire the video frame data packets of the director camera in real time and the pts information of the main camera's video frame data packets when a director instruction is acquired, wherein the director camera is the camera currently being directed to broadcast, the main camera is a camera selected in advance from the plurality of cameras, and a video frame data packet is video frame data already encoded by the camera;
the import module, configured to import the acquired video frame data packets of the director camera into a video frame queue, and to import the acquired pts information of the main camera's video frame data packets into a pts queue;
the extraction module, configured to extract one pts value and one video frame data packet from the pts queue and the video frame queue respectively in queue order, and to assign the extracted pts value to the extracted video frame data packet;
and the pushing module, configured to push the video frame data packet stamped with the pts information to the streaming media server.
Further, the multi-camera director control apparatus further includes:
the monitoring module is used for monitoring the lengths of the video frame queue and the pts queue;
and the frame loss processing module is used for performing frame loss processing on the video frame queue when the length of the video frame queue is greater than that of the pts queue.
Further, the multi-camera director control apparatus further includes:
and the frame supplementing processing module is used for performing frame supplementing processing on the video frame queue when the length of the pts queue is greater than that of the video frame queue and the difference value between the lengths of the pts queue and the video frame queue is less than or equal to a threshold value.
Further, the multi-camera director control apparatus further includes:
and the deleting module, configured to delete all pts information currently in the pts queue when the length of the pts queue is greater than that of the video frame queue and the difference between the two lengths is greater than a threshold value.
Further, the multi-camera director control apparatus further includes:
and the display module, configured to acquire the video frame data packets of all cameras in real time, decode them, and display each on the current display interface, wherein each video frame data packet carries the encoded video frame data and its corresponding pts information.
The invention also discloses a terminal device comprising a memory and a processor, wherein the memory stores a program which, when executed by the processor, implements any of the multi-camera director control methods described above.
The invention also discloses a computer readable storage medium, on which a program is stored, which when executed by a processor implements any of the above-described multi-camera director control methods.
In the invention, the video data packets transmitted by the cameras include the encoded video frames and pts timestamps. Based on the main camera's pts information, the already-encoded video frame data packets are re-multiplexed into a new video stream, which is pushed to the streaming media server. The terminal device does not need to re-encode the video images, which improves director efficiency and reduces energy consumption.
Drawings
Fig. 1 is a flowchart of a multi-camera director control method according to a first embodiment of the present invention;
fig. 2 is a flowchart of a multi-camera director control method in a second embodiment of the present invention;
fig. 3 is a block diagram of a multi-camera director control apparatus according to a third embodiment of the present invention;
fig. 4 is a schematic structural diagram of a terminal device in an embodiment of the present invention.
Detailed Description
Reference will now be made in detail to embodiments of the present invention, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the accompanying drawings are illustrative only for the purpose of explaining the present invention, and are not to be construed as limiting the present invention.
These and other aspects of embodiments of the invention will be apparent with reference to the following description and attached drawings. In the description and drawings, particular embodiments of the invention have been disclosed in detail as being indicative of some of the ways in which the principles of the embodiments of the invention may be practiced, but it is understood that the scope of the embodiments of the invention is not limited correspondingly. On the contrary, the embodiments of the invention include all changes, modifications and equivalents coming within the spirit and terms of the claims appended hereto.
The multi-camera director control method can be applied to terminal devices such as personal computers, tablet computers, and mobile phones. The terminal device is connected to multiple cameras, all of which are set to the same video encoding format, so that multi-angle image playback can be achieved. The cameras may be network cameras, and the terminal device may connect to each camera over a wired or wireless link for information exchange. The terminal device can display the pictures of all cameras in real time and select one picture as the live or recorded output. During live broadcasting or recording, the user can switch to any camera's picture at will as the live or recorded picture (this process is directing).
In a specific embodiment, the multi-camera director control method can be applied to a mobile terminal device such as a tablet computer. The mobile terminal device connects to each camera through a personal 5G WiFi hotspot; this connection is quick and convenient, so a live broadcast can be started anytime, anywhere. The mobile terminal device pushes the live or recorded video to a live-streaming platform, which distributes it to each user terminal.
Referring to fig. 1, a multi-camera broadcast guiding control method according to a first embodiment of the present invention includes steps S11 to S14.
And step S11, when the broadcasting instruction is acquired, acquiring the video frame data packet of the broadcasting guide camera in real time, and acquiring pts information of the video frame data packet of the main camera. The broadcasting guide camera is a camera for conducting current picture broadcasting guide, the main camera is a camera selected from a plurality of cameras in advance, and the video frame data packet is video frame data coded by the camera.
The director instruction is the instruction issued by the user to start a live broadcast or recording: when the live broadcast or recording starts, the director instruction is issued, and after it is received, directed video broadcasting can begin. Before directing starts, the terminal device first establishes a communication connection with each camera and selects one camera as the main camera. The main camera provides the pts information, ensuring that all cameras use a uniform pts timeline.
It can be understood that which camera serves as the main camera may be preset; for example, the camera corresponding to the current director picture is the main camera by default, or one camera may be designated as the main camera.
After the terminal device establishes a communication connection with each camera, it can acquire the video frame data packets sent by the cameras. Each camera encodes the video frame data it collects in real time and then sends the encoded data to the terminal device. In the video encoding process, the camera must generate pts information for each compressed video frame; pts (presentation timestamp) is a long integer indicating the playback time point of a video frame. Through the pts information, a player knows in what order and at what rhythm to play the video frames continuously. Each video frame data packet sent to the terminal device carries its corresponding pts information.
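As a concrete illustration of pts generation, consider a constant-frame-rate stream. The 90 kHz timebase below is an assumption (a common convention in RTP and MPEG-TS), not something stated in this patent:

```python
# Hypothetical sketch of pts generation in a camera's encoder.
# The 90 kHz timebase is an assumed convention, not part of the patent.
TIMEBASE = 90_000  # ticks per second
FPS = 25           # frames per second

def pts_for_frame(n: int) -> int:
    """pts of the n-th frame: n * (timebase / fps) ticks."""
    return n * TIMEBASE // FPS

# Consecutive 25 fps frames are 90000 / 25 = 3600 ticks apart.
print([pts_for_frame(n) for n in range(4)])  # [0, 3600, 7200, 10800]
```

Since each camera starts this counter at its own moment and with its own rules, the pts sequences of different cameras are not directly comparable, which motivates the single-timeline strategy described next.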
Since the terminal device is connected to multiple cameras, the pts values of the cameras' network video streams are usually inconsistent: the cameras start encoding at different times, and their timestamp generation rules differ. The cameras may also be set to different frame rates, for example 25 frames per second on one camera and 30 on another. Even when the cameras are set to the same frame rate, slight differences remain (both set to 25 frames per second, but one actually running at 25.01 and another at 24.99), and over a long time these accumulate into a large divergence. Moreover, the stability of the network between camera and terminal device cannot be fully guaranteed, and network delays may occur. If video frames whose pts values are inconsistent in format and order, i.e. out-of-order pts, are spliced together, pushing the stream to the streaming media server or recording a local file will fail. Even if a file can be stored, it cannot be played back in order and continuously at the playback end, and video and sound cannot be kept synchronized.
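The scale of the drift mentioned above is easy to quantify; a small sketch using the 25.01 fps figure from the text:

```python
# Two cameras both nominally at 25 fps, one actually running at 25.01 fps:
nominal_fps = 25.0
actual_fps = 25.01
seconds = 3600  # one hour of broadcast

extra_frames = (actual_fps - nominal_fps) * seconds
skew_seconds = extra_frames / nominal_fps
# Roughly 36 extra frames per hour, i.e. about 1.44 s of audio/video skew.
print(extra_frames, skew_seconds)
```

A fraction of a frame per second is invisible at first but, as the text notes, becomes a large difference over a long session.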
Based on this, the algorithm strategy of this embodiment is to use the pts time sequence of one of the cameras as the timestamp sequence of the live/recorded output stream, while ensuring that the pictures from the multiple cameras remain consistent with that pts time sequence.
A director camera can be set: by default one camera is the director camera, i.e. its picture is what is live-broadcast or recorded, and during the live broadcast or recording the user can switch any camera to serve as the director camera.
It can be understood that the main camera and the director camera can be the same camera or two different cameras.
And step S12, importing the acquired video frame data packet of the director camera into a video frame queue, and importing the pts information of the acquired video frame data packet of the main camera into a pts queue.
In this embodiment, after the user starts directing, the terminal device acquires the director camera's video frame data packets and the pts information of the main camera's video frame data packets in real time. The director camera's packets acquired in real time are imported into a video frame queue, and the acquired pts information of the main camera's packets is imported into a pts queue. A queue is a first-in, first-out data structure: data that enters first leaves first. Data is therefore taken out of the queue in the order it was stored, and no disorder can occur.
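The two queues described here are plain FIFO structures; a minimal sketch using Python's `collections.deque` (queue and packet names are illustrative, not from the patent):

```python
from collections import deque

video_frame_queue = deque()  # encoded packets from the director camera
pts_queue = deque()          # pts values from the main camera

# Producer side: append() preserves arrival order.
for pkt in ("pkt0", "pkt1", "pkt2"):
    video_frame_queue.append(pkt)
for pts in (0, 3600, 7200):
    pts_queue.append(pts)

# Consumer side: popleft() takes items strictly first-in, first-out,
# so frames can never come out in a different order than they went in.
first = (video_frame_queue.popleft(), pts_queue.popleft())
print(first)  # ('pkt0', 0)
```

The FIFO property is exactly what rules out the out-of-order pts problem described earlier.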
Furthermore, to give the live or recorded picture better continuity, directing should begin only once a video key frame (I frame) of the director camera's image has been received after the director instruction is issued; that is, importing into the video frame queue starts from the key frame's video frame data packet, so that the directed picture, once decoded, can be pushed to each user side and viewed smoothly.
In general, in terms of video encoding a camera retains only the I frames and P frames of a video. An I frame is a video key frame storing one complete video image; a P frame is a forward-dependent predicted frame, which must be decoded together with the previous frame to recover a complete image. Therefore, when live broadcasting or recording starts, the device must check whether a key frame has arrived; if so, data is pushed into the video frame queue and the pts queue, and matched push-stream output begins. In addition, when the director camera is switched during live broadcasting or recording, the original camera's video frame data packets continue to be pushed into the video frame queue as long as the new director camera's key frame has not arrived; once that key frame is acquired, the new director camera's packets are pushed into the video frame queue instead. Throughout this period, the data in the pts queue is pushed continuously without interruption.
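The switching rule above can be sketched as a small filter. `Packet`, `camera_id`, and `is_keyframe` are illustrative assumptions, not names from the patent:

```python
from dataclasses import dataclass

@dataclass
class Packet:
    camera_id: int
    is_keyframe: bool

def switch_director(stream, old_cam, new_cam):
    """Keep forwarding the old camera's packets until the new director
    camera's first I frame arrives, then cut over on that key frame."""
    current = old_cam
    for p in stream:
        if current != new_cam and p.camera_id == new_cam and p.is_keyframe:
            current = new_cam  # cut over exactly on the key frame
        if p.camera_id == current:
            yield p

stream = [Packet(1, False), Packet(2, False),  # P frame of cam 2: ignored
          Packet(1, False), Packet(2, True),   # cam 2's I frame: cut over
          Packet(2, False)]
out = [p.camera_id for p in switch_director(stream, old_cam=1, new_cam=2)]
print(out)  # [1, 1, 2, 2]
```

Cutting over only on an I frame guarantees the decoder at the user side never receives a P frame whose reference frame it has not seen.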
And step S13, extracting a pts information and a video frame data packet from the pts queue and the video frame queue respectively according to the queue sequence, and assigning the extracted pts information to the extracted video frame data packet.
And step S14, pushing the video frame data packet endowed with pts information to a streaming media server.
During live streaming or file recording, one item is taken from each of the pts queue and the video frame queue, the extracted pts value is assigned to the currently extracted video frame data packet, and the packet is pushed. The elements of the pts queue and the video frame queue decrease accordingly.
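Steps S13 and S14 together amount to a paired pop-and-restamp loop; a sketch in which the `send` callback stands in for the push to the streaming media server (names are illustrative):

```python
from collections import deque

def restamp_and_push(pts_queue: deque, frame_queue: deque, send):
    """Consume one pts and one packet per output frame; no re-encoding,
    only the timestamp is replaced (sketch of steps S13/S14)."""
    while pts_queue and frame_queue:
        pts = pts_queue.popleft()
        packet = frame_queue.popleft()
        send({"payload": packet, "pts": pts})

sent = []
restamp_and_push(deque([0, 3600]), deque(["enc0", "enc1"]), sent.append)
print(sent)
```

Because the payload is forwarded untouched, the terminal device's only per-frame work is a queue pop and a timestamp assignment.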
It should be noted that the video frame data packets pushed by the terminal device to the streaming media server are already encoded and compressed data, i.e. the original data sent by the camera, forwarded directly after being stamped with the main camera's pts information. The terminal device therefore does not need to encode or compress video data, which greatly reduces energy consumption and yields long battery life, fast response, and high efficiency.
In this embodiment, the video data packets transmitted by the cameras include the encoded video frames and pts timestamps. Based on the main camera's pts information, the already-encoded video frame data packets are re-multiplexed into a new video stream, which is pushed to the streaming media server. The terminal device does not need to re-encode the video images, improving director efficiency.
Further, in one embodiment of the present invention, the terminal device may also have an image display function, i.e. the picture of each camera is displayed on the terminal device for the convenience of the live-broadcast or recording operator. During directing, a terminal device with the image display function receives the video frame data packets (each comprising the encoded video frame data and its corresponding pts information) transmitted by each camera, decodes them, and displays each on the current display interface. That is, the terminal device needs to decode but not encode; during directing, the output uses the video streams already encoded by the cameras.
Referring to fig. 2, a multi-camera broadcast guiding control method according to a second embodiment of the present invention includes steps S21 to S28.
And S21, when the director instruction is acquired, acquiring the video frame data packets of the director camera in real time, and acquiring the pts information of the main camera's video frame data packets along with the main camera's audio data. The director camera is the camera currently being directed to broadcast, the main camera is a camera selected in advance from the plurality of cameras, and a video frame data packet is encoded video frame data.
Step S22, importing the obtained video frame data packet of the director camera into a video frame queue, importing the pts information of the obtained video frame data packet of the main camera into a pts queue, and importing the obtained audio data of the main camera into an audio queue.
And step S23, extracting a pts information and a video frame data packet from the pts queue and the video frame queue respectively according to the queue order, and assigning the extracted pts information to the extracted video frame data packet.
And step S24, pushing the audio data in the audio queue and the video frame data packet endowed with pts information to the streaming media server.
When directing with audio, audio and video must be pushed to the streaming media server at the same time; for audio, the main camera's audio packets are used directly. For video, the main camera's video frame pts values are pushed into the pts queue, and the director camera's video frame data packets into the video frame queue. During live streaming or file recording, one item is taken from each of the pts queue and the video frame queue, the pts is assigned to the video frame, and it is pushed together with the audio data. Because the audio and video pts come from the same camera, audio and video stay synchronized, and since nothing is encoded in the whole process, the terminal device's CPU consumption, memory usage, and power consumption are all low.
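A simplified sketch of the audio path. The strict 1:1 audio/video interleave here is an illustrative simplification (real audio packets arrive at their own rate); the point is that audio is forwarded as-is while video is re-stamped:

```python
from collections import deque

audio_queue = deque(["a0", "a1"])  # main camera's audio, forwarded unchanged
pts_queue = deque([0, 3600])       # main camera's video pts
frame_queue = deque(["v0", "v1"])  # director camera's encoded frames

muxed = []
while pts_queue and frame_queue:
    # Video: director camera payload stamped with the main camera's pts.
    muxed.append(("video", frame_queue.popleft(), pts_queue.popleft()))
    # Audio: main camera packet passed through untouched.
    if audio_queue:
        muxed.append(("audio", audio_queue.popleft()))
print(muxed)
```

Since the video pts and the audio both originate from the main camera, the two streams share one timeline and stay synchronized without any transcoding.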
And S25, monitoring the lengths of the video frame queue and the pts queue.
The frame rate may fluctuate during directing: the cameras' hardware frame-rate output may be unstable, network instability may introduce delay, or the cameras may simply be set to different frame rates. The lengths of the video frame queue and the pts queue must therefore be monitored to keep sound and picture synchronized. Since the main camera's audio data is always being output, the main camera's video pts and audio pts match, i.e. picture and sound match. Two situations then arise. First, if the pts queue of the video data keeps growing, the pictures matching the audio are not being output, i.e. the director camera's picture frame rate is low, or frames are being dropped or delayed; left unhandled, sound and picture would stay out of sync. Second, if the video frame queue keeps growing, the director camera's frame rate is high, or the main camera is delayed or dropping frames, i.e. the received pictures awaiting output have no corresponding video pts and audio data; some pictures can then be dropped so that the video frame rate matches the main camera's audio.
If the video frame queue is larger than the pts queue, the director camera's video frame rate is higher than the main camera's, and the video frame queue needs to be trimmed. If the pts queue is larger than the video frame queue, the main camera's frame rate is higher than the director camera's, and the pts queue needs to be trimmed or the video frame queue supplemented with frames.
And S26, when the length of the video frame queue is greater than that of the pts queue, performing frame loss processing on the video frame queue.
Specifically, the step of performing frame loss processing includes:
judging whether the difference between the lengths of the video frame queue and the pts queue is less than or equal to a threshold value;
if so, when a key frame of the director camera is acquired, replacing the last video frame data packet in the video frame queue with the key frame's video frame data packet;
if not, when a key frame of the director camera is acquired, deleting all video frame data packets currently in the video frame queue and beginning the import with the key frame's video frame data packet.
The threshold is, for example, 4. When the video frame queue is 1 to 4 entries larger than the pts queue, then while importing the director camera's video frame data packets, when a key frame arrives in the director camera's video stream, the key frame's packet replaces the last frame's packet in the queue. This is equivalent to dropping the last video frame in the queue and importing the newly arrived key frame's packet in its place. Performing the frame-drop when a key frame arrives ensures that every frame in the queue can still be decoded through its forward dependencies (the P frames in the queue decode against preceding frames), preserving decoding accuracy and continuity while lowering the input frame rate.
When the video frame queue is more than 4 elements longer than the pts queue, the number of buffered video frames has run far ahead of the number of pts values, because the frame rate of the director camera is much higher than that of the main camera. In this case, while importing the director camera's video frame data packets into the queue, when a key frame of the video stream arrives, all packets in the video frame queue are cleared and importing restarts from the key frame's data packet, so that sound and picture are quickly resynchronized while the video remains decodable.
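The frame-dropping rules above can be sketched in Python. This is a minimal illustration only; the queue types, the byte-string packet representation, and the `THRESHOLD` constant of 4 are assumptions taken from the example in the description, not part of the patent's actual implementation:

```python
from collections import deque

THRESHOLD = 4  # example threshold from the description above

def import_director_packet(video_q: deque, pts_q: deque,
                           packet: bytes, is_key_frame: bool) -> None:
    """Import one director-camera packet, dropping frames when the
    video frame queue has grown longer than the pts queue."""
    backlog = len(video_q) - len(pts_q)
    if backlog <= 0 or not is_key_frame:
        # No backlog, or not yet a key frame: import normally so the
        # forward decoding dependency of P frames stays intact.
        video_q.append(packet)
    elif backlog <= THRESHOLD:
        # Small backlog: the arriving key frame replaces the last
        # queued frame, i.e. exactly one frame is dropped.
        video_q.pop()
        video_q.append(packet)
    else:
        # Large backlog: clear the whole queue and restart from the
        # key frame so sound and picture resynchronize quickly.
        video_q.clear()
        video_q.append(packet)
```

Both drop paths wait for a key frame, mirroring the description: the queue is only ever cut at a point from which decoding can restart cleanly.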
The video frame queue and the pts queue are consumed synchronously, that is, at output time one frame of video data and one pts value are taken together. Consequently, if the video frame queue is growing, the pts queue must be empty (size 0); conversely, if the pts queue is growing, the video frame queue cannot keep up with it. Whenever one queue keeps growing, the overall clearing strategy is to quickly reduce the size difference between the two queues so that they stay synchronized.
And S27, when the length of the pts queue is greater than that of the video frame queue and the difference value between the lengths of the pts queue and the video frame queue is less than or equal to a threshold value, performing frame supplementing processing on the video frame queue.
The frame supplementing processing method comprises the steps of copying the video frame data packet of the key frame of the director camera when the video frame data packet of the key frame of the director camera is obtained, and importing the video frame data packet of the key frame of the director camera and the copied video frame data packet into the video frame queue.
That is, when the pts queue is 1 to 4 elements longer than the video frame queue, then when a key frame of the director camera arrives, the key frame's video frame data packet is copied, and the two copies of the key-frame packet are added to the video frame queue, performing the frame supplementing operation. This raises the input frame rate while keeping the input frames decodable, matching the pts of the main camera and keeping audio and video synchronized.
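A hypothetical sketch of this frame-supplementing step follows; the function name, the `threshold` default of 4, and the idea of appending the same packet object twice are illustrative assumptions, not the patent's concrete implementation:

```python
from collections import deque

def supplement_on_key_frame(video_q: deque, pts_q: deque,
                            key_packet: bytes, threshold: int = 4) -> None:
    """When the pts queue is moderately ahead of the video frame queue,
    import an arriving key frame twice so the video queue catches up."""
    deficit = len(pts_q) - len(video_q)
    video_q.append(key_packet)
    if 0 < deficit <= threshold:
        # Copy of the same key-frame packet: decodes identically, so the
        # stream stays valid while the queue gains one extra frame.
        video_q.append(key_packet)
```

Duplicating a key frame (rather than an arbitrary P frame) is what keeps the supplemented stream decodable, since the copy has no dependency on any dropped neighbor.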
And step S28, when the length of the pts queue is greater than that of the video frame queue and the difference value between the lengths of the pts queue and the video frame queue is greater than a threshold value, deleting all pts information in the pts queue currently.
When the pts queue is more than 4 elements longer than the video frame queue, all elements in the pts queue are cleared, and then the latest 1 pts value is imported. This quickly rebalances the pts queue against the video frame queue, so that sound and picture are quickly resynchronized.
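The pts-queue reset can be sketched as below; the function name and `threshold` default are assumptions for illustration:

```python
from collections import deque

def import_main_pts(pts_q: deque, video_q: deque,
                    latest_pts: int, threshold: int = 4) -> None:
    """Import the newest main-camera pts; if the pts queue has run more
    than `threshold` ahead of the video frame queue, clear it first so
    only the latest pts value remains."""
    if len(pts_q) - len(video_q) > threshold:
        pts_q.clear()   # discard all stale pts values at once
    pts_q.append(latest_pts)
```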
The following description is given by taking a specific application scenario as an example, and the multi-camera director control method in the embodiment of the present invention includes steps S31 to S40.
S31, during directing, a video frame data packet of the current frame of the director camera is obtained, together with the pts information and audio data of the current frame of the main camera. A video frame data packet is video frame data already encoded by the camera.
And S32, comparing the lengths of the video frame queue and the pts queue.
And S33, when the lengths of the video frame queue and the pts queue are equal, introducing the acquired video frame data packet of the director camera into the video frame queue, introducing the pts information of the acquired video frame data packet of the main camera into the pts queue, and introducing the acquired audio data of the main camera into the audio queue.
And S34, when the length of the video frame queue is greater than that of the pts queue and the difference value between the lengths of the video frame queue and the pts queue is less than or equal to a threshold value, judging whether the obtained video frame data packet of the director camera is a video frame data packet of a key frame.
And S35, when the obtained video frame data packet of the director camera is the video frame data packet of a key frame, using the video frame data packet of the key frame of the director camera to replace the last video frame data packet in the video frame queue.
Namely, when the length of the video frame queue is greater than that of the pts queue and the difference value between the lengths of the video frame queue and the pts queue is less than or equal to the threshold value, frame dropping operation is carried out on the video frame queue when a key frame of the director camera arrives. It can be understood that, when the obtained video frame data packet of the director camera is not the video frame data packet of the key frame, the video frame data packet and the pts information are continuously added to the video frame queue and the pts queue, respectively, and the video frame data packet of the next frame of the director camera, and the pts information and the audio data of the video frame data packet of the next frame of the main camera are obtained, and the process returns to step S32.
Namely, when the length of the video frame queue exceeds the length of the pts queue by a certain amount, frame loss operation is carried out on the video frame queue when a key frame of the director camera arrives until the lengths of the video frame queue and the pts queue are equal.
And S36, when the length of the video frame queue is greater than that of the pts queue and the difference value between the lengths of the video frame queue and the pts queue is greater than a threshold value, judging whether the video frame data packet of the director camera is a video frame data packet of a key frame.
And S37, when the obtained video frame data packet of the director camera is a video frame data packet of a key frame, deleting all the video frame data packets in the current video frame queue, and importing the obtained video frame data packet of the director camera into the video frame queue.
It can be understood that, when the importing of the video frame data packet of the current frame of the director camera is completed, the importing of the video frame data packet of the next frame is continued, that is, after step S37, the video frame data packet of the next frame is continuously obtained, and the process returns to step S32 until the lengths of the video frame queue and the pts queue are equal.
When the length of the video frame queue is greater than that of the pts queue and the difference between the lengths of the video frame queue and the pts queue is greater than a threshold value, a frame loss operation is performed to a greater extent when a key frame of the director camera arrives. It can be understood that, when the obtained video frame data packet of the director camera is not the video frame data packet of the key frame, the video frame data packet and the pts information are continuously added to the video frame queue and the pts queue, respectively, and the video frame data packet of the next frame of the director camera, and the pts information and the audio data of the video frame data packet of the next frame of the main camera are obtained, and the process returns to step S32.
And S38, when the length of the pts queue is greater than that of the video frame queue and the difference value between the lengths of the pts queue and the video frame queue is less than or equal to a threshold value, judging whether the video frame data packet of the director camera is a video frame data packet of a key frame.
And S39, when the obtained video frame data packet of the director camera is the video frame data packet of the key frame, copying the video frame data packet of the key frame of the director camera, and importing the video frame data packet of the key frame of the director camera and the copied video frame data packet into the video frame queue.
It is understood that after step S39, the video frame data packet of the next frame is continuously obtained, and the process returns to step S32 until the lengths of the video frame queue and the pts queue are equal.
When the length of the pts queue is larger than that of the video frame queue and the difference value between the lengths of the pts queue and the video frame queue is smaller than or equal to the threshold value, when the current frame of the director camera is a key frame, frame supplementing operation is carried out on the video frame queue. It can be understood that, when the video frame data packet of the director camera is not the video frame data packet of the key frame, the video frame data packet and the pts information are continuously added to the video frame queue and the pts queue, respectively, and the video frame data packet of the next frame of the director camera, and the pts information and the audio data of the video frame data packet of the next frame of the main camera are obtained, and the process returns to step S32.
And S40, when the length of the pts queue is greater than that of the video frame queue and the difference value between the lengths of the pts queue and the video frame queue is greater than a threshold value, deleting all pts information in the pts queue currently.
When the length of the pts queue is greater than that of the video frame queue and the difference between their lengths is greater than the threshold value, all information currently in the pts queue is deleted and the 1 most recently acquired pts value is imported into the pts queue, so that sound and picture are quickly synchronized; meanwhile, video frames in the video frame queue and audio in the audio queue are imported normally. After a director command is issued, data must be pushed into the video frame queue and the pts queue starting from a key frame. That is, the director switch takes effect in the video frame queue starting from a key frame (I frame), so that the director picture remains smooth after decoding.
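Steps S31 to S40 can be combined into a single per-packet dispatch. The sketch below is a simplified, hypothetical model: it assumes one call delivers one director-camera packet together with one main-camera pts/audio sample, whereas in practice separate capture threads would feed the queues at their own rates. The class name and `THRESHOLD` of 4 are assumptions:

```python
from collections import deque

class DirectorSync:
    """Simplified model of steps S31-S40: per-packet queue balancing."""
    THRESHOLD = 4

    def __init__(self) -> None:
        self.video_q: deque = deque()  # director-camera video packets
        self.pts_q: deque = deque()    # main-camera pts values
        self.audio_q: deque = deque()  # main-camera audio data

    def push(self, packet: bytes, is_key: bool, pts: int, audio: bytes) -> None:
        self.audio_q.append(audio)              # audio is always imported
        vl, pl = len(self.video_q), len(self.pts_q)
        if vl == pl:                            # S33: queues balanced
            self.video_q.append(packet)
            self.pts_q.append(pts)
        elif vl > pl:                           # video frame queue ahead
            self.pts_q.append(pts)
            if not is_key:                      # keep importing until a key frame
                self.video_q.append(packet)
            elif vl - pl <= self.THRESHOLD:     # S34/S35: drop one frame
                self.video_q.pop()
                self.video_q.append(packet)
            else:                               # S36/S37: restart from key frame
                self.video_q.clear()
                self.video_q.append(packet)
        else:                                   # pts queue ahead
            if pl - vl <= self.THRESHOLD:       # S38/S39: supplement on key frame
                self.pts_q.append(pts)
                self.video_q.append(packet)
                if is_key:
                    self.video_q.append(packet)  # duplicated key-frame packet
            else:                               # S40: keep only the newest pts
                self.pts_q.clear()
                self.pts_q.append(pts)
                self.video_q.append(packet)
```

Each branch corresponds to one of the comparison outcomes in step S32; the key-frame-gated actions match the frame-drop and frame-supplement rules described above.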
During directing, the terminal device extracts pts information and a video frame data packet from the pts queue and the video frame queue respectively according to the queue order, and assigns the extracted pts information to the extracted video frame data packet. Then, the audio data in the audio queue and the video frame data packets that have been assigned pts information are pushed to the streaming media server.
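The paired extraction step can be sketched as follows; the dictionary output format and function name are illustrative assumptions standing in for whatever packet structure the push module actually sends:

```python
from collections import deque

def pop_synced(video_q: deque, pts_q: deque, audio_q: deque) -> dict:
    """Take one video packet and one pts value in queue order, assign
    the main camera's pts to the director-camera packet, and return
    what the push module would send to the streaming server."""
    packet = video_q.popleft()
    pts = pts_q.popleft()        # consumed strictly in step with the video
    audio = audio_q.popleft() if audio_q else None
    return {"data": packet, "pts": pts, "audio": audio}
```

Because the two queues are popped together, the balancing logic upstream guarantees that every output frame carries a pts on the main camera's timeline, which is what keeps sound and picture synchronized across camera switches.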
Referring to fig. 3, a multi-camera director control apparatus according to a third embodiment of the present invention includes,
the acquisition module 31 is configured to acquire a video frame data packet of a director camera in real time and acquire pts information of the video frame data packet of a main camera when a director instruction is acquired, where the director camera is a camera for performing current picture director, the main camera is a camera selected from a plurality of cameras in advance, and the video frame data packet is video frame data encoded by the camera;
the import module 32 is configured to import the obtained video frame data packet of the director camera into a video frame queue, and import the pts information of the obtained video frame data packet of the main camera into a pts queue;
an extracting module 33, configured to extract a pts information and a video frame data packet from the pts queue and the video frame queue according to a queue order, and assign the extracted pts information to the extracted video frame data packet;
and the pushing module 34 is configured to push the video frame data packet with the pts information to the streaming media server.
Further, the multi-camera director control apparatus further includes:
the monitoring module is used for monitoring the lengths of the video frame queue and the pts queue;
and the frame loss processing module is used for performing frame loss processing on the video frame queue when the length of the video frame queue is greater than that of the pts queue.
Further, the multi-camera director control apparatus further includes:
and the frame supplementing processing module is used for performing frame supplementing processing on the video frame queue when the length of the pts queue is greater than that of the video frame queue and the difference value between the lengths of the pts queue and the video frame queue is less than or equal to a threshold value.
Further, the multi-camera director control apparatus further includes:
and the deleting module is used for deleting all pts information in the pts queue currently when the length of the pts queue is greater than that of the video frame queue and the difference value between the lengths of the pts queue and the video frame queue is greater than a threshold value.
Further, the multi-camera director control apparatus further includes:
and the display module is used for acquiring video frame data packets of all the cameras in real time, decoding the video frame data packets and then respectively displaying the decoded video frame data packets on the current display interface, wherein the video frame data packets comprise the video frame data packets and the corresponding pts information.
The multi-camera director control apparatus provided by the embodiment of the present invention has the same implementation principle and technical effects as the method embodiments; for brevity, where the apparatus embodiment is not described in detail, reference may be made to the corresponding content in the method embodiments.
Referring to fig. 4, a terminal device according to an embodiment of the present invention is shown, which includes a processor 10, a memory 20, and a computer program 30 stored in the memory and executable on the processor, where the processor 10 implements the multi-camera director control method when executing the computer program 30.
The terminal device may be, but is not limited to, a personal computer, a mobile phone, and other computer devices. The processor 10 may be, in some embodiments, a Central Processing Unit (CPU), controller, microcontroller, microprocessor or other data Processing chip for executing program codes stored in the memory 20 or Processing data.
The memory 20 includes at least one type of readable storage medium, which includes a flash memory, a hard disk, a multimedia card, a card type memory (e.g., SD or DX memory, etc.), a magnetic memory, a magnetic disk, an optical disk, and the like. The memory 20 may in some embodiments be an internal storage unit of the terminal device, for example a hard disk of the terminal device. The memory 20 may also be an external storage device of the terminal device in other embodiments, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), and the like provided on the terminal device. Further, the memory 20 may also include both an internal storage unit of the terminal device and an external storage device. The memory 20 may be used not only to store application software installed in the terminal device, various types of data, and the like, but also to temporarily store data that has been output or is to be output.
Optionally, the terminal device may further comprise a user interface, a network interface, a communication bus, etc., the user interface may comprise a Display (Display), an input unit such as a Keyboard (Keyboard), and the optional user interface may further comprise a standard wired interface, a wireless interface. Alternatively, in some embodiments, the display may be an LED display, a liquid crystal display, a touch-sensitive liquid crystal display, an OLED (Organic Light-Emitting Diode) touch device, or the like. The display, which may also be referred to as a display screen or display unit, is suitable, among other things, for displaying information processed in the terminal device and for displaying a visualized user interface. The network interface may optionally include a standard wired interface, a wireless interface (e.g., WI-FI interface), typically used to establish a communication link between the device and other electronic devices. The communication bus is used to enable connection communication between these components.
It should be noted that the configuration shown in fig. 4 does not constitute a limitation of the terminal device, and in other embodiments the terminal device may include fewer or more components than those shown, or some components may be combined, or a different arrangement of components.
The present invention also proposes a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the multi-camera director control method as described above.
Those of skill in the art will understand that the logic and/or steps represented in the flowcharts or otherwise described herein, for example, an ordered listing of executable instructions for implementing logical functions, can be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device (e.g., a computer-based system, a processor-containing system, or another system that can fetch instructions from the instruction execution system, apparatus, or device and execute them). For the purposes of this description, a "computer-readable medium" can be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device.
More specific examples (a non-exhaustive list) of the computer-readable medium would include the following: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CDROM). Additionally, the computer-readable medium could even be paper or another suitable medium upon which the program is printed, as the program can be electronically captured, via for instance optical scanning of the paper or other medium, then compiled, interpreted or otherwise processed in a suitable manner if necessary, and then stored in a computer memory.
It should be understood that portions of the present invention may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, the various steps or methods may be implemented in software or firmware stored in memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, any one or combination of the following techniques, which are known in the art, may be used: a discrete logic circuit having a logic gate circuit for implementing a logic function on a data signal, an application specific integrated circuit having an appropriate combinational logic gate circuit, a Programmable Gate Array (PGA), a Field Programmable Gate Array (FPGA), or the like.
In the description herein, references to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, the schematic representations of the terms used above do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
The above-mentioned embodiments only express several embodiments of the present invention, and the description thereof is more specific and detailed, but not construed as limiting the scope of the present invention. It should be noted that, for a person skilled in the art, several variations and modifications can be made without departing from the inventive concept, which falls within the scope of the present invention. Therefore, the protection scope of the present patent should be subject to the appended claims.

Claims (12)

1. A multi-camera director control method is characterized by comprising the following steps,
when a broadcasting instruction is acquired, acquiring a video frame data packet of a broadcasting-directing camera in real time and acquiring pts information of the video frame data packet of a main camera, wherein the broadcasting-directing camera is a camera for conducting current picture broadcasting, the main camera is a camera selected from a plurality of cameras in advance, and the video frame data packet is video frame data coded by the camera;
the method comprises the steps of importing an acquired video frame data packet of a director camera into a video frame queue, and importing pts information of the acquired video frame data packet of a main camera into a pts queue;
extracting a pts information and a video frame data packet from the pts queue and the video frame queue respectively according to the queue sequence, and assigning the extracted pts information to the extracted video frame data packet;
and pushing the video frame data packet endowed with pts information to a streaming media server.
2. The multi-camera director control method of claim 1, wherein said step of importing the acquired video frame data packets of the director camera into a video frame queue comprises:
after the director instruction is obtained, the obtained video frame of the director camera is imported into a video frame queue from the video frame data packet of the first key frame of the director camera.
3. The multi-camera director control method of claim 1, further comprising:
monitoring the lengths of the video frame queue and the pts queue;
and when the length of the video frame queue is greater than that of the pts queue, performing frame loss processing on the video frame queue.
4. The multi-camera director control method of claim 3, wherein said step of performing frame loss processing on said video frame queue comprises:
judging whether the length difference value of the video frame queue and the pts queue is smaller than or equal to a threshold value;
if so, when the video frame data packet of a key frame of the director camera is obtained, using the video frame data packet of the key frame to replace the video frame data packet of the last frame of the video frame queue;
if not, when the video frame data packets of the key frame of the director camera are acquired, deleting all the video frame data packets in the current video frame queue, and starting to import the video frame data packets of the key frame of the director camera.
5. The multi-camera director control method of claim 3, wherein said step of monitoring the lengths of said video frame queue and said pts queue is further followed by:
and when the length of the pts queue is greater than that of the video frame queue and the difference value between the lengths of the pts queue and the video frame queue is less than or equal to a threshold value, performing frame supplementing processing on the video frame queue.
6. The multi-camera director control method of claim 5, wherein said step of performing frame-filling processing on said video frame queue comprises:
when the video frame data packet of the key frame of the director camera is acquired, the video frame data packet of the key frame of the director camera is copied, and the video frame data packet of the key frame of the director camera and the copied video frame data packet are led into the video frame queue.
7. The multi-camera director control method of claim 3, wherein said step of monitoring the lengths of said video frame queue and said pts queue is further followed by:
and when the length of the pts queue is greater than that of the video frame queue and the difference value between the lengths of the pts queue and the video frame queue is greater than a threshold value, deleting all pts information in the pts queue currently.
8. The multi-camera director control method according to claim 1, wherein when a director instruction is obtained, audio data sent by the main camera is also obtained in real time;
the step of pushing the video frame data packet endowed with pts information to the streaming media server further comprises:
and pushing the audio data of the main camera to a streaming media server.
9. The multi-camera director control method of claim 1, wherein the step of obtaining in real-time video frame data packets of the director camera and obtaining pts information for the video frame data packets of the main camera further comprises, before the step of:
and acquiring video frame data packets of all the cameras in real time, decoding the video frame data packets, and then respectively displaying the video frame data packets on the current display interface, wherein the video frame data packets comprise the video frame data packets and the corresponding pts information.
10. A multi-camera director control device is characterized in that the device comprises,
the acquisition module is used for acquiring a video frame data packet of a broadcasting guide camera in real time and acquiring pts information of the video frame data packet of a main camera when a broadcasting guide instruction is acquired, wherein the broadcasting guide camera is a camera for conducting current picture broadcasting, the main camera is a camera selected from a plurality of cameras in advance, and the video frame data packet is video frame data coded by the camera;
the import module is used for importing the acquired video frame data packet of the director camera into a video frame queue and importing the pts information of the acquired video frame data packet of the main camera into a pts queue;
the extraction module is used for respectively extracting a pts information and a video frame data packet from the pts queue and the video frame queue according to the queue sequence, and assigning the extracted pts information to the extracted video frame data packet;
and the pushing module is used for pushing the video frame data packet endowed with pts information to the streaming media server.
11. A terminal device comprising a memory and a processor, the memory storing a program that when executed by the processor implements the multi-camera director control method of any one of claims 1-9.
12. A computer-readable storage medium on which a program is stored, the program implementing the multi-camera director control method according to any one of claims 1 to 9 when executed by a processor.
CN202310012321.2A 2023-01-05 2023-01-05 Multi-camera broadcast guide control method and device, readable storage medium and terminal equipment Active CN115695918B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310012321.2A CN115695918B (en) 2023-01-05 2023-01-05 Multi-camera broadcast guide control method and device, readable storage medium and terminal equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310012321.2A CN115695918B (en) 2023-01-05 2023-01-05 Multi-camera broadcast guide control method and device, readable storage medium and terminal equipment

Publications (2)

Publication Number Publication Date
CN115695918A true CN115695918A (en) 2023-02-03
CN115695918B CN115695918B (en) 2023-04-18

Family

ID=85057373

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310012321.2A Active CN115695918B (en) 2023-01-05 2023-01-05 Multi-camera broadcast guide control method and device, readable storage medium and terminal equipment

Country Status (1)

Country Link
CN (1) CN115695918B (en)

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2007306110A (en) * 2006-05-09 2007-11-22 Matsushita Electric Ind Co Ltd Video / audio synchronization detection device and video / audio reproduction system with verification function
CN101946518A (en) * 2007-12-28 2011-01-12 诺基亚公司 Methods, apparatuses, and computer program products for adaptive synchronized decoding of digital video
CN103702013A (en) * 2013-11-28 2014-04-02 北京航空航天大学 Frame synchronization method for multiple channels of real-time videos
EP3032766A1 (en) * 2014-12-08 2016-06-15 Thomson Licensing Method and device for generating personalized video programs
US20170208220A1 (en) * 2016-01-14 2017-07-20 Disney Enterprises, Inc. Automatically synchronizing multiple real-time video sources
CN112866733A (en) * 2021-01-05 2021-05-28 广东中兴新支点技术有限公司 Cloud director synchronization system and method of multiple live devices
CN114079813A (en) * 2020-08-18 2022-02-22 中兴通讯股份有限公司 Picture synchronization method, encoding method, video playback device and video encoding device
CN114550067A (en) * 2022-02-28 2022-05-27 新华智云科技有限公司 A method, device, equipment and storage medium for automatic live broadcasting and directing of sports events
CN115348409A (en) * 2022-07-12 2022-11-15 视联动力信息技术股份有限公司 Video data processing method and device, terminal equipment and storage medium


Also Published As

Publication number Publication date
CN115695918B (en) 2023-04-18

Similar Documents

Publication Publication Date Title
CN110868600B (en) Target tracking video plug-flow method, display method, device and storage medium
CN105187850B (en) The method and apparatus that the information of encoded video data is provided and receives encoded video data
JP6562992B2 (en) Trick playback in digital video streaming
CN104394426B (en) Streaming Media speed playing method and device
CN112752115B (en) Live broadcast data transmission method, device, equipment and medium
WO2021147448A1 (en) Video data processing method and apparatus, and storage medium
CN107634930B (en) A kind of acquisition method and device of media data
CN110505522A (en) Processing method, device and the electronic equipment of video data
US8682139B2 (en) L-cut stream startup
CN104918123A (en) Method and system for playback of motion video
CN113225585B (en) Video definition switching method and device, electronic equipment and storage medium
CN115243074A (en) Video stream processing method and device, storage medium and electronic equipment
CN105978955A (en) Mobile video synchronization system, method and mobile terminal
CN114189711A (en) Video processing method and apparatus, electronic device, storage medium
CN112788360A (en) Live broadcast method, live broadcast device and computer program product
CN113207040A (en) Data processing method, device and system for video remote quick playback
CN112087642B (en) Cloud guide playing method, cloud guide server and remote management terminal
CN103929682B (en) Method and device for setting key frames in video live broadcast system
CN110139128B (en) Information processing method, interceptor, electronic equipment and storage medium
WO2004086765A1 (en) Data transmission device
CN115695918B (en) Multi-camera broadcast guide control method and device, readable storage medium and terminal equipment
KR20080064399A (en) MP4 demultiplexer and its operation method
CN112437316A (en) Method and device for synchronously playing instant message and live video stream
US20210400334A1 (en) Method and apparatus for loop-playing video content
WO2022100742A1 (en) Video encoding and video playback method, apparatus and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant