CN112689158B - Method, device, apparatus and computer-readable medium for processing video - Google Patents
- Publication number
- CN112689158B (application CN201910994382.7A)
- Authority
- CN
- China
- Prior art keywords
- video
- image
- images
- preset
- frames
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Landscapes
- Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
Abstract
The invention discloses a method, a device, equipment and a computer-readable medium for processing video, relating to the field of computer technology. The method includes: receiving a captured video; determining whether the images in the video change within a preset time period; if the images in the video do not change within the preset time period, selecting one or more frames of images from the video and combining them with the audio and time stamps of the video to establish a data packet of the video; if the images in the video change within the preset time period, segmenting the video into a preset number of frame images and combining them with the audio and time stamps of the video to establish the data packet of the video; and storing the data packet of the video. This implementation can ensure video quality while saving bandwidth resources.
Description
Technical Field
The present invention relates to the field of computer technology, and in particular, to a method, apparatus, device, and computer readable medium for processing video.
Background
Live video is increasingly part of daily life, and as camera resolutions increase, the demand for high-quality live video has become urgent. High-definition video requires greater bandwidth and traffic, but both are limited; when multiple users contend for them, the viewing experience degrades significantly.
Therefore, for live video broadcasting, the compression ratio can be continually increased to meet user requirements; however, video quality is inversely related to the compression ratio, and the higher the compression ratio, the harder it is to ensure video quality.
In the course of making the invention, the inventor found at least the following problem in the prior art: ensuring video quality requires occupying more bandwidth resources.
Disclosure of Invention
In view of this, embodiments of the present invention provide a method, apparatus, device, and computer readable medium for processing video, which can ensure video quality and save bandwidth resources.
To achieve the above object, according to one aspect of an embodiment of the present invention, there is provided a method of processing video, including:
receiving a captured video;
determining whether the images in the video change within a preset time period;
if the images in the video do not change within the preset time period, selecting one or more frames of images from the video, and establishing a data packet of the video by combining the audio and time stamps of the video;
if the images in the video change within the preset time period, segmenting the video into a preset number of frame images, and establishing a data packet of the video by combining the audio and time stamps of the video;
and storing the data packet of the video.
The determining whether the image in the video changes includes:
encoding the video, comparing the encoding results of different image frames of the video, and determining from the comparison whether the images in the video change.
The determining whether the image in the video changes within the preset time period comprises the following steps:
determining, within a time period set by a preset live-broadcast delay, whether the images in the video change.
After the data packet of the video is stored, the method comprises the following steps:
Receiving a request for acquiring the video sent by a client, wherein the request for acquiring the video comprises a sequence number of the video requested by the client last time;
Acquiring a data packet corresponding to a next sequence number of the sequence number according to the sequence number;
and sending the corresponding data packet of the next video to the client.
The selecting one or more frames of images from the video comprises:
one or more images are selected from the video in time order.
The segmenting the video into a preset number of frame images comprises the following steps:
dividing the video into a preset number of frame images according to the preset live-broadcast delay and the human-eye video frame rate.
According to a second aspect of an embodiment of the present invention, there is provided an apparatus for processing video, including:
The receiving module is used for receiving the collected video;
the determining module is used for determining whether the image in the video changes or not in a preset time period;
The processing module is used for selecting one or more frames of images from the video when the images in the video are unchanged within a preset time period, and establishing a data packet of the video by combining the audio and the time stamp of the video; and for segmenting the video into a preset number of frame images when the images in the video change within the preset time period, and establishing the data packet of the video by combining the audio and the time stamp of the video;
And the storage module is used for storing the data packet of the video.
The determining module is specifically configured to encode the video, compare encoding results of different image frames of the video, and determine whether an image in the video changes.
According to a third aspect of an embodiment of the present invention, there is provided an electronic device that processes video, including:
one or more processors;
storage means for storing one or more programs,
The one or more programs, when executed by the one or more processors, cause the one or more processors to implement the methods as described above.
According to a fourth aspect of embodiments of the present invention, there is provided a computer readable medium having stored thereon a computer program which when executed by a processor implements a method as described above.
One embodiment of the above invention has the following advantage or beneficial effect: because the captured video is received, different schemes can be executed depending on whether the images in the video change. Within the preset time period, either one or more frames of images are selected from the video or the video is segmented into a preset number of frame images; the audio and time stamps of the video are combined to establish a data packet of the video, and the data packet is stored. It can be seen that, whether or not the images in the video change, it is not necessary to send all of the captured video; images from the captured video can be sent selectively, so that bandwidth resources are saved while video quality is ensured.
Further effects of the above-described non-conventional alternatives are described below in connection with the embodiments.
Drawings
The drawings are included to provide a better understanding of the invention and are not to be construed as unduly limiting the invention. Wherein:
FIG. 1 is a schematic view of a scene of processing video according to an embodiment of the invention;
FIG. 2 is a schematic diagram of the main flow of a method of processing video according to an embodiment of the invention;
FIG. 3 is another scene diagram of processing video according to an embodiment of the invention;
FIG. 4 is a flow chart of requesting video according to an embodiment of the invention;
Fig. 5 is a schematic diagram of the main structure of an apparatus for processing video according to an embodiment of the present invention;
FIG. 6 is an exemplary system architecture diagram in which embodiments of the present invention may be applied;
fig. 7 is a schematic diagram of a computer system suitable for use in implementing an embodiment of the invention.
Detailed Description
Exemplary embodiments of the present invention will now be described with reference to the accompanying drawings, in which various details of the embodiments of the present invention are included to facilitate understanding, and are to be considered merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the invention. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
In the live video broadcasting process, because bandwidth and traffic are limited, the compression ratio of the video is continually increased in order to improve the viewing experience of users. However, increasing the compression ratio makes it difficult to ensure video quality.
Live video broadcasting has wide application scenarios, such as unmanned-warehouse monitoring and machine-room monitoring. In such scenarios, the video changes little during most of the monitoring time; yet even when the video does not change for a long time, live broadcasting is still realized by compressing and sending the video, which occupies a large amount of bandwidth. As a result, a great deal of bandwidth resources is wasted in order to guarantee video quality.
In order to solve the technical problem that video quality is difficult to guarantee and bandwidth resources are wasted, the following technical scheme in the embodiment of the invention can be adopted.
Referring to fig. 1, fig. 1 is a schematic view of a scene of processing video according to an embodiment of the present invention, including a video source, a video server, a computer, and a mobile terminal.
The video source is a device for capturing video. As one example, the video source may be a high-definition camera or a mobile-phone camera. The video source sends the captured video to the video server for processing and saving.
The video server is coupled to the video source. The video source can send the collected video to the video server in a wireless transmission mode, such as WiFi, or in a wired transmission mode, such as optical fiber. It will be appreciated that the video sent by the video source to the video server is not processed by the video source.
The video server is used for processing and storing the video sent by the video source. In practical applications, the video server may process the received video according to the actual requirements. The video server is coupled to the computer and the mobile terminal.
The video server can receive a video request sent by the computer or the mobile terminal, and respond to the video request to feed back corresponding video to the computer or the mobile terminal.
It should be noted that the video server may be specifically divided into one or more sub-servers to implement the above functions.
Referring to fig. 2, fig. 2 is a schematic diagram of the main flow of a method for processing video according to an embodiment of the present invention. Within a preset period of time, based on whether the images in the video change, the video is processed accordingly and a data packet of the video is created, so that bandwidth resources are saved while video quality is ensured. As shown in fig. 2, the method specifically comprises the following steps:
S201, receiving the collected video.
The video source may send the captured video to a video server, and the video server performs corresponding processing on the received video. In the embodiment of the present invention, the execution subject of each step in fig. 2 may be the video server.
As one example, a video source may be installed in a warehouse to monitor the goods stored there; the captured video is then video of the warehouse. As another example, a video source may be installed in a machine room to monitor the equipment there; the captured video is then video of the machine room.
It is understood that the received captured video may come from a variety of scenes. After being received, the captured video can be stored. As one example, it is stored according to the time at which it was received; as another example, it may be stored according to the time at which it was captured. Of course, the captured video may also be stored in other ways.
S202, determining whether the image in the video changes or not in a preset time period.
A time period may be preset, and within this preset time period it is determined whether the images in the video change. As an example, in a residential-community surveillance video, the preset time period may be a window around 13:00 at noon; within this period, whether the images in the video change determines whether security personnel need to be called to patrol.
In the embodiment of the invention, the live-broadcast delay is taken as an example of the preset time period to illustrate the embodiment.
In the live video broadcasting process, the video captured by the video source cannot be delivered to the client instantly, owing to the network, the server, and other factors. The interval between the time point at which the video source uploads the video and the time point at which the client receives it is called the delay.
In the embodiment of the invention, the live broadcast time delay can be preset according to the actual situation and/or the working state of the equipment. As one example, the live delay may be equal to 2 seconds.
A video is made up of multiple frames of images. Within the preset live delay, different schemes can be adopted based on whether each frame of image changes.
Whether the image in the video changes or not can be judged according to the following technical scheme.
In one embodiment of the present invention, the video may first be encoded, so that each frame of image in the video has a corresponding encoding result. By comparing the encoding results of different image frames, it is determined whether the images in the video change. Any of the following encoding schemes may be used: H.261, H.263, H.264, M-JPEG, or MPEG.
If the encoding results of the different image frames are consistent, the images in the video are determined to be unchanged; if they are inconsistent, the images in the video are determined to have changed.
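By way of illustration only (the embodiment does not prescribe a particular implementation), such a comparison of per-frame encoding results could be sketched as follows in Python; the function name, the use of SHA-256 hashing, and the assumption of independently encoded frames (e.g. M-JPEG payloads) are all illustrative assumptions.

```python
import hashlib


def frames_unchanged(encoded_frames):
    """Return True when every encoded frame is byte-identical to the first.

    encoded_frames is assumed to hold per-frame encoding results (for
    example, JPEG payloads from an M-JPEG encoder) collected over the
    preset time period.
    """
    if len(encoded_frames) < 2:
        return True
    reference = hashlib.sha256(encoded_frames[0]).digest()
    return all(hashlib.sha256(f).digest() == reference for f in encoded_frames[1:])
```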
Referring to fig. 3, fig. 3 is another scene diagram of processing video according to an embodiment of the invention. The scene includes a video source, a video encoding server, a video event server, a video file server, and a client.
The video encoding server may be used to encode the video captured by the video source and to transmit the encoded video to the video event server. The video event server determines whether the images in the video have changed by comparing the encoding results of different image frames. A data packet of the video is then stored in the video file server and transmitted to the client.
In one embodiment of the invention, the video may be split into multiple frames of images, and each frame is converted into a gray-scale image. The gray-scale image of each frame is subtracted from the gray-scale image of the first frame, and whether the images in the video change is judged from whether the result of the subtraction is zero.
If the subtraction result of each frame's gray-scale image and the first frame's gray-scale image is zero, the images in the video are unchanged; if the result is not zero, the images in the video have changed.
In practical applications, even when the scene itself has not changed, the subtraction result of the gray-scale images of two frames may not be zero because the lighting has changed. A tolerance ratio may therefore be set in advance: when the proportion of non-zero pixels in the gray-scale difference between two frames of the video does not exceed the preset ratio, the images in the video are considered unchanged.
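A minimal sketch of this gray-scale comparison, assuming OpenCV (cv2) and NumPy; the tolerance values and the function name are illustrative assumptions rather than values specified by the embodiment.

```python
import cv2
import numpy as np


def images_unchanged(frames, ratio_threshold=0.01, pixel_tolerance=10):
    """Compare each frame's gray-scale image against the first frame.

    The video is treated as unchanged when, for every frame, the share of
    pixels whose absolute gray-level difference from the first frame
    exceeds pixel_tolerance stays below ratio_threshold; this absorbs
    small lighting variations.
    """
    gray_ref = cv2.cvtColor(frames[0], cv2.COLOR_BGR2GRAY)
    for frame in frames[1:]:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        diff = cv2.absdiff(gray, gray_ref)
        changed_ratio = np.count_nonzero(diff > pixel_tolerance) / diff.size
        if changed_ratio > ratio_threshold:
            return False
    return True
```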
In the embodiment of the invention, other ways of determining whether the image in the video changes can be adopted.
A video event detector can also be used to judge whether the images in the video change. Illustratively, the video event detector is software based on real-time video image analysis that can identify moving objects in real-time video, for example judging whether a person walks through the scene. If no moving object appears in the video, the images in the video are determined to be unchanged.
S203, selecting one or more frames of images from the video when the images in the video are unchanged within a preset time period, and establishing a data packet of the video by combining the audio and the time stamp of the video.
In one embodiment of the present invention, one or more images may be selected from the video if the images in the video have not changed within a preset period of time. The purpose of selecting the image is to replace the video.
If the images in the video do not change within the preset time period, the frames of the video are nearly identical. Transmitting the video means transmitting the multiple frame images it contains, and if the images do not change, the transmitted frames are all essentially the same image. Sending the same image repeatedly over limited bandwidth is certainly a waste of bandwidth resources, so the video can instead be replaced with one or more images.
In one embodiment of the present invention, when the images in the video do not change within the preset time period, one or more images may be selected from the video in time order. A single frame may be selected, or multiple frames may be selected; as one example, the first frame and the second frame of the video may be selected.
In a video file, not only a plurality of frame images but also audio are included. After receiving the video, the client plays the video by combining the multi-frame images and the audio in the video. To avoid wasting bandwidth resources, one or more frames of images may be selected from the video, and the audio and time stamps of the video are combined to create a data packet for the video.
As one example, although the images in the video do not change, the video still needs to be played at the client, so one or more images are selected from the video to stand in for it. The data packet of the video can then be created from the selected image or images together with the audio and the time stamp of the video.
The time stamp records the correspondence between the audio of the video and points in time. According to the time points in the time stamp, the client plays the corresponding audio of the video while displaying the one or more frames of images selected from the video.
The data packets of the video, as compared to the video itself, include only the selected image or images of the video, the audio of the video, and the time stamp. The data volume of the data packets of the video is much smaller than the data volume of the video itself.
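A rough sketch of such a data packet for the "images unchanged" branch, assuming an in-memory Python structure; the VideoPacket class and its field names are illustrative assumptions, not a format defined by the embodiment.

```python
from dataclasses import dataclass


@dataclass
class VideoPacket:
    sequence_number: str   # e.g. "A002", used later when clients poll for the next packet
    frames: list           # the one or more selected frame images that stand in for the video
    audio: bytes           # the audio track for the period
    timestamps: list       # time points tying audio playback to the displayed frame(s)


def build_static_packet(sequence_number, selected_frames, audio, timestamps):
    """Build the packet for the branch where the images did not change:
    only the selected frame(s) are carried instead of the full video."""
    return VideoPacket(sequence_number, list(selected_frames), audio, list(timestamps))
```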
S204, in a preset time period, if the images in the video are changed, segmenting the video into a preset number of frame images, and combining the audio and the time stamp of the video to establish a data packet of the video.
If the images in the video are determined to have changed within the preset time period, the video can be segmented into a preset number of frame images, and the segmented frames are combined with the audio and the time stamp of the video to establish the data packet of the video.
In order to reduce the amount of data needed to send the video, the video may be split into a preset number of frame images; the segmented frames replace the video, and the data packet of the video is established in combination with the audio and the time stamp of the video.
In one embodiment of the invention, the video frame rate measures the number of frames displayed per second. Owing to the physiology of the human eye, a sequence of images viewed at a frame rate higher than about 16 frames per second is perceived as continuous; in other words, when the frame rate is higher than 16, the sequence of images is seen as video.
If the images in the video are determined to have changed within the preset time period, it is still not necessary to send every image in the video to the client; limiting what is sent saves bandwidth resources. The human-eye video frame rate is the highest frame rate the human eye can resolve: when the video frame rate is higher than this rate, the eye can hardly perceive the improvement, whereas when it is lower, the eye readily perceives the reduction.
As one example, if the video frame rate is set higher than the human-eye video frame rate, the viewer perceives the video as clear; if it is set equal to the human-eye video frame rate, the viewer still perceives it as clear, with the same perceived definition as at the higher rate. That is, when the video frame rate is equal to or higher than the human-eye video frame rate, the eye cannot perceive a change in the frame rate.
If the video frame rate is set lower than the human-eye video frame rate, the viewer perceives the definition as reduced compared with the case where the frame rate equals the human-eye video frame rate. That is, the eye readily perceives a change in the frame rate when the video frame rate is below the human-eye video frame rate.
In the embodiment of the invention, the video can be segmented into a preset number of frame images according to the preset live-broadcast delay and the human-eye video frame rate. As one example, a third-party software development kit (SDK) such as OpenCV may be invoked to slice the video into the preset number of frame images according to the preset live delay and the human-eye video frame rate.
In particular, the human-eye video frame rate determines the minimum number of frames required per second. From the preset live delay and the human-eye video frame rate, the preset number of frames into which the video is segmented can be obtained. As one example, the preset number equals the product of the preset live delay and the human-eye video frame rate: if the preset live delay is 2 seconds and the human-eye video frame rate is 32 frames/second, the minimum number of frames after segmentation is 2 × 32 = 64. In other words, the video captured within the preset live delay is segmented into 64 frames, which occupies the least bandwidth while still guaranteeing the clarity of the video.
Of course, the video captured within the preset live delay is segmented into at least 64 frames; it may also be segmented into more than 64 frames.
In one embodiment of the invention, the video is segmented into the preset number of frame images according to the preset live delay and the human-eye video frame rate: the video can be divided evenly into the preset number of frames for playback by the client.
As an example, with a preset live delay of 2 seconds and a human-eye video frame rate of 32 frames/second, the preset number is 2 × 32 = 64, so the captured video is sliced into 64 frames. Taking the preset live delay as the total duration, the duration is divided into 64 equal parts, and one frame is selected from each part, yielding 64 frames of images.
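A minimal sketch of this uniform sampling, assuming OpenCV's VideoCapture API; the function name and the default values (2 seconds, 32 frames/second) follow the example above but are otherwise illustrative assumptions.

```python
import cv2


def sample_frames(video_path, live_delay_s=2.0, eye_frame_rate=32):
    """Pick live_delay_s * eye_frame_rate frames (e.g. 2 * 32 = 64) from the
    captured segment by selecting one frame from each equal sub-interval."""
    target = int(live_delay_s * eye_frame_rate)
    cap = cv2.VideoCapture(video_path)
    total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    frames = []
    for i in range(target):
        # first frame index of the i-th sub-interval
        index = min(int(i * total / target), max(total - 1, 0))
        cap.set(cv2.CAP_PROP_POS_FRAMES, index)
        ok, frame = cap.read()
        if ok:
            frames.append(frame)
    cap.release()
    return frames
```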
The data packet of the video is then established by combining the preset number of frame images obtained by segmentation with the audio and the time stamp of the video.
According to the time points in the time stamp, the client plays the corresponding audio of the video while displaying the preset number of frame images obtained by segmentation.
Compared with the video itself, the data packet of the video only comprises a preset number of frame images segmented in the video, the audio and the time stamp of the video. The data volume of the data packets of the video is much smaller than the data volume of the video itself.
S205, storing the data packet of the video.
The video server stores the data packets of the video and can send the data packets of the video to other servers or clients for the other servers or clients to play the video.
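Putting S201–S205 together, the overall decision flow might be sketched as follows; it reuses the images_unchanged helper sketched earlier, and the dictionary packet layout and the store callback are assumptions for illustration only.

```python
def process_captured_video(frames, audio, timestamps, store, preset_count=64):
    """S201-S205 in outline: branch on whether the images changed within the
    preset period, build a packet from frames + audio + timestamps, store it."""
    if images_unchanged(frames):                   # S202 -> S203: images unchanged
        payload = [frames[0]]                      # one frame stands in for the video
    else:                                          # S202 -> S204: images changed
        step = max(len(frames) // preset_count, 1)
        payload = frames[::step][:preset_count]    # a preset number of frames
    packet = {"frames": payload, "audio": audio, "timestamps": timestamps}
    store(packet)                                  # S205: persist the packet
    return packet
```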
Referring to fig. 4, fig. 4 is a flow chart illustrating a video request according to an embodiment of the present invention. In fig. 4, the video server receives a video request from the client and determines whether there is a latest video to feed back to the client. The method specifically includes the following steps:
S401, receiving a video request.
When the client needs to play video, it can send a video request to the video server. One or more videos previously acquired from the video server are stored on the client, and each video has a corresponding sequence number.
As an example, video 1 has a sequence number of A002 and video 2 has a sequence number of A003. From the above sequence numbers, the acquisition time of video 1 is earlier than that of video 2. Video 1 and video 2 may be distinguished by sequence numbers.
In the embodiment of the invention, the client sends a request of the video to the video server, wherein the video request comprises a sequence number of the video requested by the client last time. In this way, the video server can determine whether or not there is the latest video based on the request of the video.
S402, judging whether the latest video exists according to the serial number.
The video server receives the request of the video sent by the client, and can judge whether the latest video exists or not based on the serial number.
First, the video server checks, among the videos stored on the video server itself, whether there is a video corresponding to the sequence number next to the reported sequence number. If a video corresponding to the next sequence number exists, S403 is executed; otherwise, S404 is executed.
S403, sending the corresponding data packet.
The video server sends the data packet corresponding to the next sequence number to the client. That is, based on the sequence number carried in the client's video request, the video server determines the data packet corresponding to the sequence number that follows it.
In this way, by continually sending video requests, the client acquires the latest video.
S404, returning a null value.
If no data packet corresponding to the next sequence number exists in the video server, the video server returns a null value to the client; the client thereby knows that the video it holds is already the latest.
In the above embodiment, the client sends video requests to the video server and thereby obtains, in sequence, the data packets corresponding to successive sequence numbers, realizing live broadcast of high-definition video.
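A sketch of the server side of this sequence-number pull protocol (S401–S404); the packet_store mapping, the lexicographic ordering of sequence numbers like "A002"/"A003", and the function name are illustrative assumptions.

```python
def handle_video_request(last_sequence_number, packet_store):
    """Return the packet whose sequence number follows the one the client
    reported, or None (the 'null value') when the client is already up to date.

    packet_store is assumed to be a dict mapping sequence numbers to stored
    packets, e.g. {"A002": pkt1, "A003": pkt2}.
    """
    numbers = sorted(packet_store)
    try:
        next_index = numbers.index(last_sequence_number) + 1
    except ValueError:
        return None                                   # unknown sequence number
    if next_index < len(numbers):
        return packet_store[numbers[next_index]]      # S403: send the next packet
    return None                                       # S404: return a null value
```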
Fig. 5 is a schematic diagram of a main structure of an apparatus for processing video according to an embodiment of the present invention, where the apparatus for processing video may implement a method for processing video, and as shown in fig. 5, the apparatus for processing video specifically includes:
a receiving module 501, configured to receive the collected video.
A determining module 502, configured to determine whether an image in the video changes within a preset period of time.
A processing module 503, configured to select one or more frames of images from the video if the images in the video are unchanged within a preset time period, and establish a data packet of the video by combining the audio and the time stamp of the video;
and in a preset time period, if the images in the video are changed, segmenting the video into a preset number of frame images, and combining the audio and the time stamp of the video to establish a data packet of the video.
A storage module 504, configured to store data packets of the video.
In one embodiment of the present invention, the determining module 502 is specifically configured to encode the video, compare encoding results of different image frames of the video, and determine whether an image in the video changes.
In one embodiment of the present invention, the determining module 502 is specifically configured to determine whether an image in the video changes within a time period determined by a preset live broadcast time delay.
In one embodiment of the present invention, the processing module 503 is further configured to receive a request sent by the client to obtain a video, where the request includes the sequence number of the video last requested by the client, and to acquire, according to that sequence number, the data packet corresponding to the next sequence number;
The storage module 504 is further configured to store the corresponding data packet.
In one embodiment of the present invention, the processing module 503 is specifically configured to select one or more frames of images from the video in time sequence.
In one embodiment of the present invention, the processing module 503 is specifically configured to segment the video into a preset number of frame images according to a preset live broadcast time delay and a frame rate of the human eye video.
Fig. 6 illustrates an exemplary system architecture 600 of a method of processing video or an apparatus of processing video to which embodiments of the present invention may be applied.
As shown in fig. 6, the system architecture 600 may include terminal devices 601, 602, 603, a network 604, and a server 606. The network 604 is used as a medium to provide communication links between the terminal devices 601, 602, 603 and the server 606. The network 604 may include various connection types, such as wired, wireless communication links, or fiber optic cables, among others.
A user may interact with the server 606 via the network 604 using the terminal devices 601, 602, 603 to receive or send messages or the like. Various communication client applications such as shopping class applications, web browser applications, search class applications, instant messaging tools, mailbox clients, social platform software, etc. (by way of example only) may be installed on the terminal devices 601, 602, 603.
The terminal devices 601, 602, 603 may be various electronic devices having a display screen and supporting web browsing, including but not limited to smartphones, tablets, laptop and desktop computers, and the like.
The server 606 may be a server providing various services, such as a background management server (by way of example only) providing support for shopping-type websites browsed by users using the terminal devices 601, 602, 603. The background management server may analyze and process the received data such as the product information query request, and feedback the processing result (e.g., the target push information, the product information—only an example) to the terminal device.
It should be noted that, the method for processing video according to the embodiment of the present invention is generally performed by the server 606, and accordingly, the device for processing video is generally disposed in the server 606.
It should be understood that the number of terminal devices, networks and servers in fig. 6 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation.
Referring now to FIG. 7, there is illustrated a schematic diagram of a computer system 700 suitable for use in implementing an embodiment of the present invention. The terminal device shown in fig. 7 is only an example, and should not impose any limitation on the functions and the scope of use of the embodiment of the present invention.
As shown in fig. 7, the computer system 700 includes a Central Processing Unit (CPU) 701, which can perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM) 702 or a program loaded from a storage section 708 into a Random Access Memory (RAM) 703. In the RAM 703, various programs and data required for the operation of the system 700 are also stored. The CPU 701, ROM 702, and RAM 703 are connected to each other through a bus 704. An input/output (I/O) interface 705 is also connected to bus 704.
Connected to the I/O interface 705 are: an input section 706 including a keyboard, a mouse, and the like; an output section 707 including a cathode ray tube (CRT) or liquid crystal display (LCD), a speaker, and the like; a storage section 708 including a hard disk and the like; and a communication section 709 including a network interface card such as a LAN card or a modem. The communication section 709 performs communication processing via a network such as the Internet. A drive 710 is also connected to the I/O interface 705 as needed. A removable medium 711, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, is mounted on the drive 710 as necessary, so that a computer program read therefrom is installed into the storage section 708 as needed.
In particular, according to embodiments of the present disclosure, the processes described above with reference to flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method shown in the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network via the communication portion 709, and/or installed from the removable medium 711. The above-described functions defined in the system of the present invention are performed when the computer program is executed by a Central Processing Unit (CPU) 701.
The computer readable medium shown in the present invention may be a computer readable signal medium or a computer readable storage medium, or any combination of the two. The computer readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples of a computer-readable storage medium may include, but are not limited to, an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In the present invention, however, the computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, with the computer-readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The modules involved in the embodiments of the present invention may be implemented in software or in hardware. The described modules may also be provided in a processor, which may be described as, for example, a processor comprising a sending unit, an obtaining unit, a determining unit and a first processing unit. The names of these units do not constitute a limitation on the unit itself in some cases, and for example, the transmitting unit may also be described as "a unit that transmits a picture acquisition request to a connected server".
As a further aspect, the invention also provides a computer-readable medium, which may be included in the device described in the above embodiments or may exist separately without being incorporated into the device. The computer-readable medium carries one or more programs which, when executed by the device, cause the device to:
receive a captured video;
determine whether the images in the video change within a preset time period;
if the images in the video do not change within the preset time period, select one or more frames of images from the video, and establish a data packet of the video by combining the audio and time stamps of the video;
if the images in the video change within the preset time period, segment the video into a preset number of frame images, and establish a data packet of the video by combining the audio and time stamps of the video;
and store the data packet of the video.
According to the technical solution provided by the embodiment of the invention, because the captured video is received, different schemes can be executed depending on whether the images in the video change. Within the preset time period, either one or more frames of images are selected from the video or the video is segmented into a preset number of frame images; the audio and time stamps of the video are combined to establish a data packet of the video, and the data packet is stored. It can be seen that, whether or not the images in the video change, it is not necessary to send all of the captured video; images from the captured video can be sent selectively, so that video quality is ensured while bandwidth resources are saved.
The above embodiments do not limit the scope of the present invention. It will be apparent to those skilled in the art that various modifications, combinations, sub-combinations and alternatives can occur depending upon design requirements and other factors. Any modifications, equivalent substitutions and improvements made within the spirit and principles of the present invention should be included in the scope of the present invention.
Claims (9)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910994382.7A CN112689158B (en) | 2019-10-18 | 2019-10-18 | Method, device, apparatus and computer-readable medium for processing video |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910994382.7A CN112689158B (en) | 2019-10-18 | 2019-10-18 | Method, device, apparatus and computer-readable medium for processing video |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112689158A CN112689158A (en) | 2021-04-20 |
CN112689158B true CN112689158B (en) | 2024-12-10 |
Family
ID=75444977
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910994382.7A Active CN112689158B (en) | 2019-10-18 | 2019-10-18 | Method, device, apparatus and computer-readable medium for processing video |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112689158B (en) |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106559631A (en) * | 2015-09-30 | 2017-04-05 | 小米科技有限责任公司 | Method for processing video frequency and device |
CN110324721A (en) * | 2019-08-05 | 2019-10-11 | 腾讯科技(深圳)有限公司 | A kind of video data handling procedure, device and storage medium |
Family Cites Families (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2003230042A (en) * | 2002-01-31 | 2003-08-15 | Matsushita Electric Works Ltd | Imaging method, imaging apparatus, and eye movement measurement apparatus |
JP5566133B2 (en) * | 2010-03-05 | 2014-08-06 | キヤノン株式会社 | Frame rate conversion processor |
CN102348115B (en) * | 2010-08-02 | 2014-04-16 | 南京壹进制信息技术有限公司 | Method and device for removing redundant images from video |
CN105141943B (en) * | 2015-09-08 | 2017-11-03 | 深圳Tcl数字技术有限公司 | The adjusting method and device of video frame rate |
CN105791766A (en) * | 2016-03-09 | 2016-07-20 | 京东方科技集团股份有限公司 | Monitoring method and monitoring device |
WO2019218147A1 (en) * | 2018-05-15 | 2019-11-21 | 深圳市锐明技术股份有限公司 | Method, apparatus and device for transmitting surveillance video |
CN110062212A (en) * | 2019-05-22 | 2019-07-26 | 广东慧讯智慧科技有限公司 | Video monitoring method, device, system, equipment and computer readable storage medium |
CN110166758B (en) * | 2019-06-24 | 2021-08-13 | 京东方科技集团股份有限公司 | Image processing method, image processing device, terminal equipment and storage medium |
- 2019-10-18: CN application CN201910994382.7A / patent CN112689158B, status Active
Also Published As
Publication number | Publication date |
---|---|
CN112689158A (en) | 2021-04-20 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US20140032735A1 (en) | Adaptive rate of screen capture in screen sharing | |
CN110392306B (en) | Data processing method and equipment | |
US20230082784A1 (en) | Point cloud encoding and decoding method and apparatus, computer-readable medium, and electronic device | |
CN111093094A (en) | Video transcoding method, device and system, electronic equipment and readable storage medium | |
CN115209189B (en) | Video stream transmission method, system, server and storage medium | |
CN105469381B (en) | Information processing method and terminal | |
WO2015120766A1 (en) | Video optimisation system and method | |
CN111131817A (en) | Screen sharing method, device, storage medium and screen sharing system | |
CN111385484B (en) | Information processing method and device | |
CN110290398B (en) | Video issuing method and device, storage medium and electronic equipment | |
US10404606B2 (en) | Method and apparatus for acquiring video bitstream | |
WO2023131076A2 (en) | Video processing method, apparatus and system | |
CN112218034A (en) | Video processing method, system, terminal and storage medium | |
CN110809166A (en) | Video data processing method and device and electronic equipment | |
CN112689158B (en) | Method, device, apparatus and computer-readable medium for processing video | |
CN110912948A (en) | Method and device for reporting problems | |
CN110753243A (en) | Image processing method, image processing server and image processing system | |
CN112312200A (en) | Video cover generation method and device and electronic equipment | |
EP4117294A1 (en) | Method and device for adjusting bit rate during live streaming | |
CN108347451B (en) | Picture processing system, method and device | |
CN116980662A (en) | Streaming media playing method, streaming media playing device, electronic equipment, storage medium and program product | |
US11334979B2 (en) | System and method to detect macroblocking in images | |
EP2884742B1 (en) | Process for increasing the resolution and the visual quality of video streams exchanged between users of a video conference service | |
CN114219809A (en) | Image processing method, image processing device, electronic equipment and computer readable medium | |
CN111800649A (en) | Method and device for storing video and method and device for generating video |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | |