CN109587513B - Method and system for reading audio and video files - Google Patents
- Publication number
- CN109587513B (application number CN201811290246.1A)
- Authority
- CN
- China
- Prior art keywords
- video
- audio
- node server
- reading
- video file
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/23—Processing of content or additional data; Elementary server operations; Server middleware
- H04N21/231—Content storage operation, e.g. caching movies for short term storage, replicating data over plural servers, prioritizing data for deletion
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/23—Processing of content or additional data; Elementary server operations; Server middleware
- H04N21/232—Content retrieval operation locally within server, e.g. reading video streams from disk arrays
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/23—Processing of content or additional data; Elementary server operations; Server middleware
- H04N21/238—Interfacing the downstream path of the transmission network, e.g. adapting the transmission rate of a video stream to network bandwidth; Processing of multiplex streams
- H04N21/23805—Controlling the feeding rate to the network, e.g. by controlling the video pump
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/80—Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
- H04N21/85—Assembly of content; Generation of multimedia applications
- H04N21/854—Content authoring
- H04N21/8547—Content authoring involving timestamps for synchronizing content
Abstract
An embodiment of the invention provides a method and a system for reading an audio/video file. The method includes: a second video network node server extracts audio/video data from audio/video data packets in real time and stores the extracted data as an audio/video file; the second video network node server receives a reading instruction from a first video network node server, where the reading instruction includes identification information and double-speed reading information for the audio/video file to be read; the second video network node server determines a target audio/video file according to the identification information and reads the target audio/video file according to the double-speed reading information; and the second video network node server transmits the target audio/video file to the first video network node server, which displays it. The embodiment enables browsing of historical audio/video data, improves the flexibility of reading audio/video files, and improves the user experience.
Description
Technical Field
The invention relates to the technical field of video networking, in particular to a method and a system for reading an audio and video file.
Background
The video network is a dedicated, real-time network that transmits high-definition video at high speed over Ethernet hardware using a dedicated protocol; it can be regarded as a more advanced form of the Internet.
The video networking monitoring sharing server, also called a sharing platform, is mainly used to share the audio/video data of monitoring terminals in the video network with a monitoring platform. However, the audio/video data of a monitoring terminal are real-time and transient: the monitoring platform can only display real-time audio/video data and cannot play back historical data, so the utilization of the monitoring terminal's audio/video data is low.
Disclosure of Invention
In view of the above problems, embodiments of the present invention are proposed to provide a method for reading an audio/video file and a corresponding system for reading an audio/video file, which overcome or at least partially solve the above problems.
To solve the above problems, an embodiment of the present invention discloses a method for reading an audio/video file. The method is applied to a video network that includes a first video network node server and a second video network node server in communication with each other; the first video network node server accesses a monitoring terminal in the video network and obtains its audio/video data, and the second video network node server converts the audio/video data from the first video network node server into audio/video data packets supporting a real-time transport protocol. The method includes: the second video network node server extracts the audio/video data from the audio/video data packets in real time and stores the extracted data as an audio/video file; the second video network node server receives a reading instruction from the first video network node server, where the reading instruction includes identification information and double-speed reading information for the audio/video file to be read; the second video network node server determines a target audio/video file according to the identification information and reads the target audio/video file according to the double-speed reading information; and the second video network node server transmits the read target audio/video file to the first video network node server, which displays it.
Optionally, reading the target audio/video file according to the double-speed reading information includes: the second video network node server estimates the frame time interval of the target audio/video file from the timestamp information in each frame's program stream packet header; the second video network node server divides the frame time interval by the double-speed reading information to obtain the double-speed reading interval at which each audio/video frame of the target audio/video file is read; and the second video network node server reads the target audio/video file at the double-speed reading interval.
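The interval computation described above can be sketched as follows (a minimal illustration, not the patented implementation; the function names and the 90 kHz program-stream clock are assumptions):

```python
def estimate_frame_interval(timestamps, clock_hz=90_000):
    """Estimate the per-frame interval (seconds) from the timestamps
    carried in each frame's program stream packet header."""
    if len(timestamps) < 2:
        raise ValueError("need at least two frame timestamps")
    deltas = [(b - a) / clock_hz for a, b in zip(timestamps, timestamps[1:])]
    return sum(deltas) / len(deltas)

def read_interval(frame_interval, speed):
    """Dividing the frame interval by the speed factor yields the
    interval at which frames are read (2x speed -> half the wait)."""
    return frame_interval / speed

# e.g. a 25 fps stream (3600 ticks per frame at 90 kHz) read at 2x speed
ts = [0, 3600, 7200, 10800]
interval = read_interval(estimate_frame_interval(ts), speed=2.0)
```

Reading one frame every `interval` seconds then plays the file back at the requested multiple of real time.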
Optionally, after the second video network node server determines the target audio/video file according to the identification information and reads it according to the double-speed reading information, the method further includes: the second video network node server receives a single-frame playing instruction from the first video network node server and, according to the instruction, pauses the operation of reading the target audio/video file; and the second video network node server determines each single frame of data of the target audio/video file according to the whole-frame identification information in each frame's program stream packet header and plays the determined single frames in sequence.
Optionally, after the single frames are determined and played in sequence, the method further includes: the second video network node server receives a resume instruction from the first video network node server and, according to the resume instruction, adjusts the reading time of the audio/video frame following the frame at which reading was paused to the current time; and the second video network node server plays the target audio/video file from that next audio/video frame at the current time.
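The pause / single-frame / resume behaviour described in the two paragraphs above can be sketched as a small state holder (class and field names are hypothetical; `time.monotonic` stands in for the server's clock):

```python
import time

class FileReader:
    """Minimal sketch of timed file reading with pause and resume."""
    def __init__(self, read_interval):
        self.read_interval = read_interval      # seconds between frames
        self.next_read_at = time.monotonic()    # when the next frame is due
        self.paused = False

    def pause_for_single_frame(self):
        # Single-frame playing instruction: stop timed reading so frames
        # can be stepped through one at a time.
        self.paused = True

    def resume(self, now=None):
        # Resume instruction: schedule the frame after the one on which
        # reading was paused to be read at the current time.
        now = time.monotonic() if now is None else now
        self.next_read_at = now
        self.paused = False
```

Resetting `next_read_at` to "now" is what avoids a burst of catch-up reads after a long pause.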
Optionally, before the second video network node server determines the target audio/video file according to the identification information, the method further includes: the second video network node server receives a search instruction from the first video network node server, where the search instruction includes one or more of search start time, search end time, and search position information; and the second video network node server searches the stored audio/video files according to the search instruction to obtain the target audio/video file.
An embodiment of the invention also discloses a system for reading audio/video files, applied to a video network that includes a first video network node server and a second video network node server in communication with each other; the first video network node server accesses a monitoring terminal in the video network and obtains its audio/video data, and the second video network node server converts the audio/video data from the first video network node server into audio/video data packets supporting a real-time transport protocol. The second video network node server includes: an extraction and storage module for extracting the audio/video data from the audio/video data packets in real time and storing the extracted data as an audio/video file; an instruction receiving module for receiving a reading instruction from the first video network node server, where the reading instruction includes identification information and double-speed reading information for the audio/video file to be read; a determining and reading module for determining a target audio/video file according to the identification information and reading it according to the double-speed reading information; and a file transmission module for transmitting the read target audio/video file to the first video network node server, which displays it.
Optionally, the determining and reading module includes: a frame interval estimation module for estimating the frame time interval of the target audio/video file from the timestamp information in each frame's program stream packet header; a reading interval determining module for dividing the frame time interval by the double-speed reading information to obtain the double-speed reading interval at which each audio/video frame of the target audio/video file is read; and a double-speed reading module for reading the target audio/video file at the double-speed reading interval.
Optionally, the instruction receiving module is further configured to receive a single-frame playing instruction from the first video network node server. The second video network node server further includes: a pause reading module for pausing, according to the single-frame playing instruction, the operation of reading the target audio/video file after the target audio/video file has been determined and is being read according to the double-speed reading information; and a single-frame playing module for determining each single frame of data of the target audio/video file according to the whole-frame identification information in each frame's program stream packet header and playing the determined single frames in sequence.
Optionally, the instruction receiving module is further configured to receive a resume instruction from the first video network node server. The second video network node server further includes: a time adjusting module for adjusting, according to the resume instruction, the reading time of the audio/video frame following the frame at which reading was paused to the current time, after the single frames have been determined and played in sequence; and a resume playing module for playing the target audio/video file from that next audio/video frame at the current time.
Optionally, the instruction receiving module is further configured to receive a search instruction from the first video network node server, where the search instruction includes one or more of search start time, search end time, and search position information; and the determining and reading module is further configured to search the stored audio/video files according to the search instruction to obtain the target audio/video file.
The embodiment of the invention has the following advantages:
the embodiment of the invention is applied to the video network, and the video network can comprise a first video network node server and a second video network node server, wherein the first video network node server is communicated with the second video network node server, the first video network node server is used for accessing a monitoring terminal in the video network and acquiring audio and video data of the monitoring terminal, and the second video network node server is used for converting the audio and video data from the first video network node server into an audio and video data packet supporting a real-time transmission protocol.
In the embodiment of the invention, the second video network node server extracts the audio/video data from the converted audio/video data packets in real time and stores the extracted data as audio/video files. The second video network node server may receive a reading instruction from the first video network node server, which is used to read a target audio/video file from the audio/video files already stored on the second video network node server; the reading instruction may include identification information of the target audio/video file, according to which the second video network node server determines the target file from the stored files. The reading instruction may also include double-speed reading information. After determining the target audio/video file, the second video network node server reads it according to the double-speed reading information and transmits the read file to the first video network node server for display.
By exploiting the characteristics of the video network, on the one hand the second video network node server stores the monitoring terminal's real-time audio/video data, enabling browsing of historical audio/video data; on the other hand it can read audio/video files at double speed, improving the flexibility of reading and the user experience.
Drawings
FIG. 1 is a schematic networking diagram of a video network of the present invention;
FIG. 2 is a schematic diagram of a hardware structure of a node server of the present invention;
FIG. 3 is a schematic diagram of a hardware structure of an access switch of the present invention;
FIG. 4 is a schematic diagram of a hardware structure of an Ethernet protocol conversion gateway of the present invention;
FIG. 5 is a flowchart illustrating the steps of an embodiment of a method for reading an audio/video file according to the present invention;
FIG. 6 is a schematic design diagram of a method for reading video data by a sharing platform according to the present invention;
FIG. 7 is a block diagram of a second video network node server in an embodiment of a system for reading an audio/video file according to the present invention.
Detailed Description
In order to make the aforementioned objects, features and advantages of the present invention comprehensible, embodiments accompanied with figures are described in further detail below.
Video networking is an important milestone in network development. It is a real-time network that enables real-time transmission of high-definition video, pushing many Internet applications toward high definition, including face-to-face high-definition communication.
Video networking adopts real-time high-definition video switching technology and can integrate dozens of required services, such as video, voice, pictures, text, communication, and data, on one network platform: high-definition video conferencing, video monitoring, intelligent monitoring analysis, emergency command, digital broadcast television, time-shifted television, network teaching, live broadcast, VOD on demand, television mail, Personal Video Recorder (PVR), intranet (self-office) channels, intelligent video broadcast control, information distribution, and so on, delivering high-definition video through a television or a computer.
To better understand the embodiments of the present invention, video networking is described below:
some of the technologies applied in the video networking are as follows:
network Technology (Network Technology)
Network technology innovation in video networking improves on traditional Ethernet to handle the potentially enormous video traffic on the network. Unlike pure network Packet Switching or network Circuit Switching, video networking technology employs packet switching to meet the demands of streaming media (a data transmission technique that converts received data into a stable, continuous stream and transmits it continuously, so that the audio heard or video seen by the user is smooth, and browsing can begin before the entire file has been transmitted). Video networking technology combines the flexibility, simplicity, and low cost of packet switching with the quality and security guarantees of circuit switching, achieving seamless network-wide switched virtual circuits and a unified data format.
Switching Technology (Switching Technology)
The video network adopts the two advantages of Ethernet, asynchrony and packet switching, while eliminating Ethernet's defects under the premise of full compatibility. It offers end-to-end seamless connection across the whole network, communicates directly with user terminals, and directly carries IP data packets; user data requires no format conversion anywhere in the network. Video networking is a more advanced form of Ethernet: a real-time switching platform that can achieve network-wide, large-scale, real-time transmission of high-definition video, which the current Internet cannot, pushing many network video applications toward high definition and unification.
Server Technology (Server Technology)
Server technology on the video networking and unified video platform differs from that of traditional servers: its streaming media transmission is built on a connection-oriented basis, its data processing capacity is independent of traffic volume and communication time, and a single network layer can carry both signaling and data transmission. For voice and video services, streaming media processing on the video networking and unified video platform is much simpler than general data processing, and efficiency is improved by a factor of more than one hundred over a traditional server.
Storage Technology (Storage Technology)
To handle media content of very large capacity and very large traffic, the ultra-high-speed storage technology of the unified video platform adopts an advanced real-time operating system. Program information in a server instruction is mapped to specific hard disk space, and media content no longer passes through the server but is sent directly and instantly to the user terminal, with a typical user waiting time of less than 0.2 second. Optimized sector distribution greatly reduces the mechanical seek movement of the hard disk head; resource consumption is only 20% of an IP Internet system of the same grade, yet concurrent throughput is three times that of a traditional hard disk array, raising overall efficiency by more than a factor of ten.
Network Security Technology (Network Security Technology)
The structural design of the video network eliminates, by construction, the network security problems that trouble the Internet, through per-session service authorization control and complete isolation of equipment and user data, among other mechanisms. It generally requires no antivirus software or firewall, is immune to hacker and virus attacks, and provides users with a structurally worry-free secure network.
Service Innovation Technology (Service Innovation Technology)
The unified video platform integrates services and transmission: whether for a single user, a private-network user, or a network aggregate, only one automatic connection is needed. A user terminal, set-top box, or PC connects directly to the unified video platform to obtain a variety of multimedia video services. Instead of traditional complex application programming, the unified video platform uses a menu-style configuration table, so complex applications can be realized with very little code, enabling unlimited new service innovation.
Networking of the video network is as follows:
the video network is a centralized control network structure, and the network can be a tree network, a star network, a ring network and the like, but on the basis of the centralized control node, the whole network is controlled by the centralized control node in the network.
As shown in fig. 1, the video network is divided into an access network and a metropolitan network.
The devices of the access network part can be mainly classified into 3 types: node server, access switch, terminal (including various set-top boxes, coding boards, memories, etc.). The node server is connected to an access switch, which may be connected to a plurality of terminals and may be connected to an ethernet network.
The node server is a node which plays a centralized control function in the access network and can control the access switch and the terminal. The node server can be directly connected with the access switch or directly connected with the terminal.
Similarly, devices of the metropolitan network portion may also be classified into 3 types: a metropolitan area server, a node switch and a node server. The metro server is connected to a node switch, which may be connected to a plurality of node servers.
The node server here is the same node server as in the access network part; that is, the node server belongs to both the access network and the metropolitan area network.
The metropolitan area server is a node which plays a centralized control function in the metropolitan area network and can control a node switch and a node server. The metropolitan area server can be directly connected with the node switch or directly connected with the node server.
Therefore, the whole video network is a network structure with layered centralized control, and the network controlled by the node server and the metropolitan area server can be in various structures such as tree, star and ring.
The access network part can form a unified video platform (circled part), and a plurality of unified video platforms can form a video network; each unified video platform may be interconnected via metropolitan area and wide area video networking.
1. Video networking device classification
1.1 devices in the video network of the embodiment of the present invention can be mainly classified into 3 types: servers, switches (including ethernet gateways), terminals (including various set-top boxes, code boards, memories, etc.). The video network as a whole can be divided into a metropolitan area network (or national network, global network, etc.) and an access network.
1.2 wherein the devices of the access network part can be mainly classified into 3 types: node servers, access switches (including ethernet gateways), terminals (including various set-top boxes, code boards, memories, etc.).
The specific hardware structure of each access network device is as follows:
a node server:
as shown in fig. 2, the system mainly includes a network interface module 201, a switching engine module 202, a CPU module 203, and a disk array module 204.
The network interface module 201, the CPU module 203, and the disk array module 204 all enter the switching engine module 202; the switching engine module 202 performs an operation of looking up the address table 205 on the incoming packet, thereby obtaining the direction information of the packet; and stores the packet in a queue of the corresponding packet buffer 206 based on the packet's steering information; if the queue of the packet buffer 206 is nearly full, it is discarded; the switching engine module 202 polls all packet buffer queues for forwarding if the following conditions are met: 1) the port send buffer is not full; 2) the queue packet counter is greater than zero. The disk array module 204 mainly implements control over the hard disk, including initialization, read-write, and other operations on the hard disk; the CPU module 203 is mainly responsible for protocol processing with an access switch and a terminal (not shown in the figure), configuring an address table 205 (including a downlink protocol packet address table, an uplink protocol packet address table, and a data packet address table), and configuring the disk array module 204.
The access switch:
as shown in fig. 3, the network interface module (downstream network interface module 301, upstream network interface module 302), the switching engine module 303, and the CPU module 304 are mainly included.
A packet (uplink data) arriving from the downlink network interface module 301 enters the packet detection module 305. The packet detection module 305 checks whether the Destination Address (DA), Source Address (SA), packet type, and packet length of the packet meet the requirements; if so, it allocates a corresponding stream identifier (stream-id) and passes the packet to the switching engine module 303, otherwise the packet is discarded. A packet (downlink data) arriving from the uplink network interface module 302 enters the switching engine module 303, as does a data packet from the CPU module 304. The switching engine module 303 looks up the address table 306 for each incoming packet to obtain its direction information. If a packet entering the switching engine module 303 travels from the downlink network interface to the uplink network interface, it is stored in the queue of the corresponding packet buffer 307 in association with its stream-id; otherwise, it is stored in the queue of the corresponding packet buffer 307 according to its direction information. In either case, if the queue of the packet buffer 307 is nearly full, the packet is discarded.
The switching engine module 303 polls all packet buffer queues, which in this embodiment of the present invention is divided into two cases:
if the queue is from the downlink network interface to the uplink network interface, the following conditions are met for forwarding: 1) the port send buffer is not full; 2) the queued packet counter is greater than zero; 3) and obtaining the token generated by the code rate control module.
If the queue is not from the downlink network interface to the uplink network interface, the following conditions are met for forwarding: 1) the port send buffer is not full; 2) the queue packet counter is greater than zero.
The rate control module 308 is configured by the CPU module 304 and generates tokens for all packet buffer queues from downlink network interfaces to uplink network interfaces at a programmable interval, so as to control the rate of uplink forwarding.
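The uplink forwarding rule described above (buffer has room, queue non-empty, and a rate-control token available) can be illustrated with a small sketch; the class and method names are hypothetical, and real hardware would apply this per queue inside the switching engine:

```python
class RateControlledQueue:
    """Sketch of a downlink-to-uplink packet buffer queue whose
    forwarding is gated by rate-control tokens."""
    def __init__(self):
        self.packets = []   # queued packets (FIFO)
        self.tokens = 0     # tokens issued by the rate-control module

    def add_token(self):
        # Called by the rate-control module at a programmable interval.
        self.tokens += 1

    def try_forward(self, port_buffer_full):
        # Forward only if: 1) the port send buffer is not full,
        # 2) the queue packet counter is greater than zero,
        # 3) a rate-control token is available.
        if port_buffer_full or not self.packets or self.tokens == 0:
            return None
        self.tokens -= 1
        return self.packets.pop(0)
```

Because tokens are consumed one per forwarded packet, the token issue rate directly bounds the uplink forwarding rate.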
The CPU module 304 is mainly responsible for protocol processing with the node server, configuration of the address table 306, and configuration of the code rate control module 308.
Ethernet protocol conversion gateway:
As shown in fig. 4, the apparatus mainly includes a network interface module (a downlink network interface module 401 and an uplink network interface module 402), a switching engine module 403, a CPU module 404, a packet detection module 405, a rate control module 408, an address table 406, a packet buffer 407, a MAC adding module 409, and a MAC deleting module 410.
A data packet arriving from the downlink network interface module 401 enters the packet detection module 405. The packet detection module 405 checks whether the Ethernet MAC DA, Ethernet MAC SA, Ethernet length or frame type, video network destination address (DA), video network source address (SA), video network packet type, and packet length of the packet meet the requirements; if so, it allocates a corresponding stream identifier (stream-id), and the MAC deletion module 410 strips the Ethernet MAC DA, MAC SA, and length/frame type (2 bytes) before the packet enters the corresponding receive buffer; otherwise the packet is discarded.
The downlink network interface module 401 detects the send buffer of the port; if a packet is present, it looks up the Ethernet MAC DA of the corresponding terminal according to the video network destination address DA of the packet, prepends the terminal's Ethernet MAC DA, the MAC SA of the Ethernet protocol gateway, and the Ethernet length or frame type, and sends the packet.
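The MAC deleting and MAC adding steps above amount to stripping and prepending a standard 14-byte Ethernet header (6-byte MAC DA, 6-byte MAC SA, 2-byte length/frame type). A sketch, with illustrative function names:

```python
ETH_HEADER_LEN = 6 + 6 + 2  # MAC DA + MAC SA + length/frame type

def strip_ethernet_header(frame: bytes) -> bytes:
    """MAC deleting module 410: remove MAC DA, MAC SA, length/frame type."""
    return frame[ETH_HEADER_LEN:]

def add_ethernet_header(payload: bytes, terminal_mac_da: bytes,
                        gateway_mac_sa: bytes, eth_type: bytes) -> bytes:
    """MAC adding module 409: prepend the terminal's MAC DA, the
    gateway's MAC SA, and the Ethernet length/frame type."""
    assert len(terminal_mac_da) == 6 and len(gateway_mac_sa) == 6
    assert len(eth_type) == 2
    return terminal_mac_da + gateway_mac_sa + eth_type + payload
```

Stripping on ingress and re-adding on egress leaves the inner video network packet untouched while it crosses the Ethernet segment.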
The other modules of the Ethernet protocol gateway function similarly to those of the access switch.
A terminal:
The terminal mainly comprises a network interface module, a service processing module, and a CPU module. For example, a set-top box mainly comprises a network interface module, a video/audio codec engine module, and a CPU module; an encoding board mainly comprises a network interface module, a video/audio encoding engine module, and a CPU module; a storage device mainly comprises a network interface module, a CPU module, and a disk array module.
1.3 The devices of the metropolitan area network part can be mainly classified into 3 types: node server, node switch, and metropolitan area server. The node switch mainly comprises a network interface module, a switching engine module, and a CPU module; the metropolitan area server mainly comprises a network interface module, a switching engine module, and a CPU module.
2. Video networking packet definition
2.1 Access network packet definition
The data packet of the access network mainly includes the following parts: Destination Address (DA), Source Address (SA), reserved bytes, Payload (PDU), and CRC, laid out as shown in the following table:

| DA | SA | Reserved | Payload | CRC |
the Destination Address (DA) is composed of 8 bytes (byte), the first byte represents the type of the data packet (e.g. various protocol packets, multicast data packets, unicast data packets, etc.), there are at most 256 possibilities, the second byte to the sixth byte are metropolitan area network addresses, and the seventh byte and the eighth byte are access network addresses.
The Source Address (SA) is also composed of 8 bytes (byte), defined as the same as the Destination Address (DA).
The reserved byte consists of 2 bytes.
The payload part has a different length depending on the type of the datagram: it is 64 bytes if the datagram is one of the various protocol packets, and 1056 bytes if it is a unicast packet, although the payload is not limited to these 2 types.
The CRC consists of 4 bytes and is calculated in accordance with the standard ethernet CRC algorithm.
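The field layout defined above can be sketched directly in code. The helper names below are illustrative; the byte positions follow the text (byte 1 of the DA is the packet type, bytes 2-6 the metro address, bytes 7-8 the access network address).

```python
def parse_da(da: bytes):
    """Split an 8-byte destination address into its three fields."""
    assert len(da) == 8
    pkt_type = da[0]          # one of up to 256 packet types
    metro_addr = da[1:6]      # 5-byte metropolitan area network address
    access_addr = da[6:8]     # 2-byte access network address
    return pkt_type, metro_addr, access_addr

def access_packet_length(payload_len: int) -> int:
    """Total length: 8-byte DA + 8-byte SA + 2 reserved + payload + 4-byte CRC."""
    return 8 + 8 + 2 + payload_len + 4
```

With the payload sizes from the text, a protocol packet totals 86 bytes and a unicast packet totals 1078 bytes.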
2.2 metropolitan area network packet definition
The topology of a metropolitan area network is a graph, and there may be 2, or even more than 2, connections between two devices; that is, there may be more than 2 connections between a node switch and a node server, or between two node switches. However, the metro network address of each metro network device is unique, so in order to accurately describe the connection relationships between metro network devices, a parameter is introduced in the embodiment of the present invention: a label, used to uniquely describe a metropolitan area network device.
In this specification, the definition of the label is similar to that of a Multi-Protocol Label Switching (MPLS) label. Assuming that there are two connections between a device A and a device B, a packet from device A to device B has 2 labels, and a packet from device B to device A also has 2 labels. Labels are classified into incoming labels and outgoing labels: assuming that the label of a packet entering device A (the incoming label) is 0x0000, the label of the packet leaving device A (the outgoing label) may become 0x0001. The network access process of the metro network is a process under centralized control; that is, both address allocation and label allocation for the metro network are dominated by the metropolitan area server, and the node switch and the node server execute them passively. This differs from MPLS label allocation, which is the result of mutual negotiation between a switch and a server.
As shown in the following table, the data packet of the metro network mainly includes the following parts:

| DA | SA | Reserved | Label | Payload | CRC |
Namely Destination Address (DA), Source Address (SA), reserved bytes (Reserved), label, Payload (PDU), and CRC. The format of the label may be defined as follows: the label is 32 bits long, with the upper 16 bits reserved and only the lower 16 bits used, and its position is between the reserved bytes and the payload of the packet.
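The 32-bit label field just defined can be packed and unpacked as follows. This is a sketch under the stated layout (upper 16 bits reserved and kept zero, lower 16 bits carrying the label value); the function names are illustrative.

```python
def pack_label(value: int) -> bytes:
    """Encode a 16-bit label value into the 32-bit on-wire field."""
    assert 0 <= value <= 0xFFFF
    return value.to_bytes(4, "big")  # upper 16 bits remain zero

def unpack_label(raw: bytes) -> int:
    """Decode the 32-bit field, ignoring the reserved upper 16 bits."""
    assert len(raw) == 4
    return int.from_bytes(raw, "big") & 0xFFFF
```

Masking on decode means a receiver tolerates whatever a future revision places in the reserved bits.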
Based on the characteristics of the video network described above, one of the core concepts of the embodiment of the invention is proposed: in accordance with the protocol of the video network, the second video network node server extracts the audio/video data of the monitoring terminal from the audio/video data packets in real time, stores the audio/video data extracted in real time into audio/video files, determines a target audio/video file from the stored audio/video files according to a reading instruction from the first video network node server, reads the target audio/video file at double speed, and then transmits the target audio/video file read at double speed to the first video network node server for display.
Referring to fig. 5, a flowchart illustrating steps of an embodiment of a method for reading an audio/video file according to the present invention is shown, where the method may be applied to a video network, and the video network may include a first video network node server and a second video network node server, where the first video network node server communicates with the second video network node server, the first video network node server is used to access a monitoring terminal in the video network and acquire audio/video data of the monitoring terminal, and the second video network node server is used to convert the audio/video data from the first video network node server into an audio/video data packet supporting a real-time transmission protocol, where the method specifically includes the following steps:
Step 501: the second video network node server extracts the audio/video data from the audio/video data packets in real time and stores the audio/video data extracted in real time as an audio/video file.
In the embodiment of the invention, the second video network node server can be a video networking monitoring sharing server, and it extracts the audio/video data in real time from the audio/video data packets supporting the real-time transmission protocol. An audio/video data packet may consist of a packet header and packet data; the packet header may include number information, length information, and the like of the audio/video data packet, and the packet data may include the specific audio/video data. Therefore, when extracting the audio/video data from an audio/video data packet in real time, the second video network node server can delete the packet header of the packet to obtain the packet data, that is, the specific audio/video data. The audio/video data thus extracted in real time from the audio/video data packets is the real-time audio/video data of the monitoring terminal.
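The header-deletion step above can be sketched as follows. The text only says the header carries number and length information, so the concrete layout here (a 2-byte sequence number plus a 2-byte payload length, big-endian) is a hypothetical assumption for illustration.

```python
import struct

HEADER_FMT = ">HH"                         # assumed: sequence no., payload length
HEADER_LEN = struct.calcsize(HEADER_FMT)   # 4 bytes under this assumption

def extract_av_data(packet: bytes) -> bytes:
    """Delete the packet header and return only the audio/video payload."""
    seq, length = struct.unpack_from(HEADER_FMT, packet)
    return packet[HEADER_LEN:HEADER_LEN + length]
```

Whatever the real header layout, the operation is the same: skip the header fields and keep only the payload bytes for storage.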
The second video network node server stores the audio/video data extracted in real time at a preset position on a disk to obtain an audio/video file. The disk may be any one of a number of preset storage media, and the embodiment of the present invention does not specifically limit the capacity, material, brand, model, and the like of the disk. The preset position may be any position on the disk, for example, under a certain drive or under a certain folder. The audio/video file may be the audio/video data itself, or a file obtained by converting the audio/video data, for example by converting its format or by compressing it.
Step 502: the second video network node server receives a reading instruction from the first video network node server.
In the embodiment of the invention, the first video network node server can be a back-end server in the video networking monitoring and networking management and scheduling platform. The first video network node server can respond to a trigger operation of an operator and generate a corresponding reading instruction. The reading instruction may include at least identification information of the audio/video file to be read, or identification information of the monitoring terminal corresponding to the audio/video file to be read. The identification information of the audio/video file can be the unique number of the audio/video file, and the identification information of the monitoring terminal can be the unique number of the monitoring terminal; the monitoring terminal corresponds to the audio/video files obtained by storing the audio/video data acquired from it.
In the embodiment of the present invention, the reading instruction may include not only identification information (identification information of the audio/video file to be read, or identification information of the monitoring terminal that is the source of the audio/video file to be read) but also double-speed reading information. The identification information is used to determine a unique target audio/video file, and the double-speed reading information is used to determine the speed at which the target audio/video file is read.
Step 503: the second video network node server determines the target audio/video file according to the identification information, and reads the target audio/video file according to the double-speed reading information.
In a preferred embodiment of the present invention, the process in which the second video network node server reads the target audio/video file according to the double-speed reading information may include the following sub-steps:
Sub-step 101: the second video network node server estimates frame time interval information of the target audio/video file from the timestamp information in each frame's program stream packet header of the target audio/video file.
The second video network node server can obtain the frame rate (the number of frames contained in 1 second) of the target audio/video file by estimation. In practical application, each frame's program stream packet header of the target audio/video file contains timestamp information; the second video network node server can count the number of frames in one second and thereby estimate the frame rate, and the estimate becomes more accurate as the estimation window grows longer. The reciprocal of the frame rate is the frame time interval information.
Sub-step 102: the second video network node server divides the frame time interval information by the double-speed reading information to obtain the double-speed reading time interval information for reading each audio/video frame of the target audio/video file.
For example, if the frame rate is 25 (25 frames in 1 second), the frame time interval information is 1 second divided by 25, which equals 40 milliseconds. If the double-speed reading information is 4 times, the double-speed reading time interval information is 40 milliseconds divided by 4, which equals 10 milliseconds; that is, after one frame is read, it is necessary to wait 10 milliseconds before reading the next frame.
Sub-step 103: the second video network node server reads the target audio/video file according to the double-speed reading time interval information.
As an example, if the target audio/video file is read at normal speed, that is, the double-speed reading information is 1 time, the normal-speed reading time interval information is 40 milliseconds divided by 1, which equals 40 milliseconds; that is, after one frame is read, it is necessary to wait 40 milliseconds before reading the next frame. The second video network node server therefore uses the double-speed reading information in the reading instruction to accelerate the reading of the target audio/video file.
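Sub-steps 101 and 102 can be checked numerically with the figures from the text. The helper names below are illustrative, not from the patent.

```python
def estimate_frame_rate(timestamps_ms):
    """Estimate frames per second from per-frame header timestamps; longer
    observation windows give a more accurate estimate, as noted above."""
    span_s = (timestamps_ms[-1] - timestamps_ms[0]) / 1000.0
    return (len(timestamps_ms) - 1) / span_s

def read_interval_ms(frame_rate: float, speed: float) -> float:
    frame_interval_ms = 1000.0 / frame_rate  # sub-step 101: reciprocal of rate
    return frame_interval_ms / speed         # sub-step 102: divide by speed
```

At 25 fps this gives a 40 ms wait at normal speed and a 10 ms wait at 4x speed, matching the worked example.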
Step 504: the second video network node server transmits the read target audio/video file to the first video network node server so that the target audio/video file can be displayed on the first video network node server.
In the embodiment of the invention, the second video network node server can package the read target audio/video file into real-time transmission protocol data packets and transmit them to the first video network node server.
In a preferred embodiment of the present invention, during or after the process in which the second video network node server reads the target audio/video file according to the double-speed reading information, the second video network node server may receive a single-frame play instruction from the first video network node server. The single-frame play instruction is mainly used to play the target audio/video file frame by frame. It should be noted that the precondition for playing the target audio/video file frame by frame is that reading of the target audio/video file is suspended; therefore, after receiving the single-frame play instruction, the second video network node server suspends the operation of reading the target audio/video file. The second video network node server then needs to determine each piece of single-frame data of the target audio/video file. In practical application, the second video network node server can determine each piece of single-frame data according to the whole-frame identification information in each frame's program stream packet header of the target audio/video file, and play each determined piece of single-frame data in sequence. For example, the second video network node server determines whether the start data of each program stream packet header of the target audio/video file is 0x000001BA (the whole-frame identification information); if the start data of two program stream packet headers are both 0x000001BA, the data between the two packet headers is one piece of single-frame data.
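The single-frame determination above amounts to splitting the stream at occurrences of the program stream pack start code 0x000001BA. A sketch, with an illustrative helper name:

```python
PACK_START = b"\x00\x00\x01\xba"  # whole-frame identification information

def split_frames(stream: bytes):
    """Treat the data between consecutive pack start codes as one frame."""
    offsets = []
    i = stream.find(PACK_START)
    while i != -1:
        offsets.append(i)
        i = stream.find(PACK_START, i + 1)
    frames = [stream[offsets[k]:offsets[k + 1]]
              for k in range(len(offsets) - 1)]
    if offsets:
        frames.append(stream[offsets[-1]:])  # trailing (possibly open) frame
    return frames
```

Each returned slice begins with a pack header and runs up to the next one, so the slices can be handed to the player one at a time for single-frame playback.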
In a preferred embodiment of the present invention, after the second video network node server suspends the operation of reading the target audio/video file, the operation can be resumed. Specifically, the second video network node server may receive a resume instruction from the first video network node server and then resume reading the target audio/video file according to the resume instruction. In practical application, when the target audio/video file is being read normally, the time for reading the next frame (accumulated frame by frame) is compared with the current time: if the time for reading the next frame is greater than or equal to the current time, the next frame is read directly; if the time for reading the next frame is less than the current time, the read of the next frame is not executed. While reading of the target audio/video file is suspended, the time for reading the next frame stays unchanged but the current time keeps advancing, so when reading resumes, the time for reading the next frame may be much earlier than the current time; at that point, the time for reading the next frame needs to be adjusted to the current time, after which the next frame is read directly. Therefore, after receiving the resume instruction, the second video network node server needs to adjust the reading time of the audio/video frame following the frame at which reading of the target audio/video file was suspended to the current time, and then continue playing the target audio/video file from that next audio/video frame at the current time.
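The pause/resume timing described above can be modelled as follows. The class and method names are assumptions for illustration; the behaviour follows the text: reads are skipped once the next-read time falls behind the wall clock, and on resume the stale deadline is snapped forward to the current time.

```python
class FrameScheduler:
    def __init__(self, interval_ms: float, start_ms: float = 0.0):
        self.interval_ms = interval_ms
        self.next_read_ms = start_ms
        self.paused = False

    def try_read(self, now_ms: float) -> bool:
        # As in the text: if the next-read time is less than the current
        # time, the read is not performed.
        if self.paused or self.next_read_ms < now_ms:
            return False
        self.next_read_ms += self.interval_ms  # accumulate one interval
        return True

    def pause(self) -> None:
        self.paused = True  # next_read_ms stays fixed; the clock keeps running

    def resume(self, now_ms: float) -> None:
        self.next_read_ms = now_ms  # adjust the deadline to the current time
        self.paused = False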
In a preferred embodiment of the present invention, the second video network node server may determine the target audio/video file according to the identification information of the audio/video file to be read, and may also receive a search instruction from the first video network node server, where the search instruction includes one or more of search start time information, search end time information, and search position information; it then searches the stored audio/video files according to the search instruction to obtain the target audio/video file. For the saved audio/video files, attribute information such as save time and save position can be set for each audio/video file during saving. Specifically, the second video network node server may search the stored audio/video files for the target audio/video file by matching the search start time information and search end time information against the save time, or by matching the search position information against the save position. The embodiment of the present invention does not specifically limit the technical means used by the second video network node server to search for the target audio/video file.
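A search over the saved files by save time and save position could look like the following sketch; the attribute names and filter semantics are assumptions, since the text leaves the search mechanism open.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class SavedFile:
    name: str
    saved_at: float  # save time, e.g. seconds since the epoch
    position: str    # save position, e.g. a folder path

def search_files(files, start: Optional[float] = None,
                 end: Optional[float] = None,
                 position: Optional[str] = None):
    """Keep files whose save time lies in [start, end] and whose save
    position matches, applying only the criteria that were supplied."""
    hits = []
    for f in files:
        if start is not None and f.saved_at < start:
            continue
        if end is not None and f.saved_at > end:
            continue
        if position is not None and f.position != position:
            continue
        hits.append(f)
    return hits
```

Each criterion is optional, mirroring "one or more of" in the search instruction.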
Based on the above description of the embodiment of the method for reading an audio/video file, a method for reading video data through the sharing platform is introduced below. As shown in fig. 6, the video networking monitoring and networking management and scheduling platform is divided into a front-end display client (a web page) and a back-end server. The front-end display client is responsible for displaying the complete monitoring directory of the monitoring terminals, calling up the video data of the monitoring terminals, configuring the platform, and so on. The back-end server is responsible for unified management of the monitoring terminals accessed across the whole video network and for the docking service with a national standard (GB/T28181) platform monitoring system. The video networking monitoring sharing server, also called the sharing platform, can be understood as a gateway: it is responsible for converting the video data of the video networking monitoring terminals into video data packets of the real-time transmission protocol, thereby sharing the video data of the video network with monitoring platforms based on the real-time transmission protocol. The back-end server of the platform sends the video data packets supporting the real-time transmission protocol to the sharing platform; the sharing platform removes the packet headers of the received real-time transmission protocol video data packets to obtain the video data of the monitoring terminal and stores the video data as a video file. The sharing platform can provide a video file playback function for the back-end server and supports the back-end server in searching for the video file of a specified camera.
When reading a video file, the back-end server can read it at double speed and download it locally, and it also supports functions such as dragging to a certain time point for playback, pausing, resuming, and single-frame playback.
The embodiment of the invention is applied to the video network, and the video network can comprise a first video network node server and a second video network node server, wherein the first video network node server is communicated with the second video network node server, the first video network node server is used for accessing a monitoring terminal in the video network and acquiring audio and video data of the monitoring terminal, and the second video network node server is used for converting the audio and video data from the first video network node server into an audio and video data packet supporting a real-time transmission protocol.
In the embodiment of the invention, the second video network node server extracts the audio/video data from the converted audio/video data packets in real time and stores the audio/video data extracted in real time as audio/video files. The second video network node server may receive a reading instruction from the first video network node server, where the reading instruction is used to read a target audio/video file from the audio/video files already stored on the second video network node server and may specifically include identification information of the target audio/video file. The second video network node server determines the target audio/video file from the stored audio/video files according to the identification information. The reading instruction may also specifically include double-speed reading information. After determining the target audio/video file, the second video network node server reads it according to the double-speed reading information and transmits the read target audio/video file to the first video network node server so that the target audio/video file can be displayed there.
By applying the characteristics of the video network, on one hand, the second video network node server can store audio and video data, namely real-time audio and video data of the monitoring terminal, and the function of browsing historical audio and video data is realized. On the other hand, the second video network node server can read the audio and video files at double speed, so that the reading flexibility of the audio and video files is improved, and the use experience of a user is optimized.
It should be noted that, for simplicity of description, the method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present invention is not limited by the illustrated order of acts, as some steps may occur in other orders or concurrently in accordance with the embodiments of the present invention. Further, those skilled in the art will appreciate that the embodiments described in the specification are presently preferred and that no particular act is required to implement the invention.
Referring to fig. 7, a block diagram of a second node server of the video network in an embodiment of a system for reading an audio/video file according to the present invention is shown, where the system may be applied to a video network, and the video network may include a first node server of the video network and a second node server of the video network, where the first node server of the video network communicates with the second node server of the video network, the first node server of the video network is used to access a monitoring terminal in the video network and acquire audio/video data of the monitoring terminal, and the second node server of the video network is used to convert the audio/video data from the first node server of the video network into an audio/video data packet supporting a real-time transmission protocol, and the second node server of the video network in the system may specifically include the following modules:
the extracting and storing module 701 is used for extracting the audio and video data from the audio and video data packet in real time and storing the audio and video data extracted in real time as an audio and video file.
The instruction receiving module 702 is configured to receive a reading instruction from the first video network node server, where the reading instruction includes identification information and double-speed reading information of an audio/video file to be read.
The determining and reading module 703 is configured to determine the target audio/video file according to the identification information of the audio/video file to be read, and to read the target audio/video file according to the double-speed reading information.
And the file transmission module 704 is configured to transmit the read target audio/video file to a first video network node server, where the first video network node server is configured to display the target audio/video file.
In a preferred embodiment of the present invention, the determining and reading module 703 includes: a frame interval estimation module 7031, configured to estimate frame time interval information of the target audio/video file according to the timestamp information in each frame's program stream packet header of the target audio/video file; a reading interval determining module 7032, configured to divide the frame time interval information by the double-speed reading information to obtain the double-speed reading time interval information for reading each audio/video frame of the target audio/video file; and a double-speed reading module 7033, configured to read the target audio/video file according to the double-speed reading time interval information.
In a preferred embodiment of the present invention, the instruction receiving module 702 is further configured to receive a single-frame play instruction from the first video network node server; the second video network node server further comprises: a pause reading module 705, configured to pause the operation of reading the target audio/video file according to the single-frame play instruction after the determining and reading module 703 has determined the target audio/video file according to the identification information of the audio/video file to be read and read it according to the double-speed reading information; and a single-frame playing module 706, configured to determine each piece of single-frame data of the target audio/video file according to the whole-frame identification information in each frame's program stream packet header of the target audio/video file and to play each determined piece of single-frame data in sequence.
In a preferred embodiment of the present invention, the instruction receiving module 702 is further configured to receive a resume instruction from the first video network node server; the second video network node server further comprises: a time adjusting module 707, configured to adjust, according to the resume instruction, the reading time of the audio/video frame following the frame at which reading of the target audio/video file was suspended to the current time, after the single-frame playing module 706 has determined each piece of single-frame data of the target audio/video file according to the whole-frame identification information in each frame's program stream packet header and played each determined piece in sequence; and a resume playing module 708, configured to play the target audio/video file from that next audio/video frame at the current time.
In a preferred embodiment of the present invention, the instruction receiving module 702 is further configured to receive a search instruction from the first video network node server, where the search instruction includes one or more of search start time information, search end time information, and search position information; the determining and reading module 703 is further configured to search the stored audio/video files according to the search instruction to obtain the target audio/video file.
For the system embodiment, since it is basically similar to the method embodiment, the description is simple, and for the relevant points, refer to the partial description of the method embodiment.
The embodiments in the present specification are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, apparatus, or computer program product. Accordingly, embodiments of the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, embodiments of the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
Embodiments of the present invention are described with reference to flowchart illustrations and/or block diagrams of methods, terminal devices (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing terminal to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing terminal, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing terminal to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing terminal to cause a series of operational steps to be performed on the computer or other programmable terminal to produce a computer implemented process such that the instructions which execute on the computer or other programmable terminal provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While preferred embodiments of the present invention have been described, additional variations and modifications of these embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all such alterations and modifications as fall within the scope of the embodiments of the invention.
Finally, it should also be noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or terminal that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or terminal. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in a process, method, article, or terminal that comprises the element.
The method for reading the audio/video file and the system for reading the audio/video file provided by the invention are described in detail, a specific example is applied in the text to explain the principle and the implementation mode of the invention, and the description of the embodiment is only used for helping to understand the method and the core idea of the invention; meanwhile, for a person skilled in the art, according to the idea of the present invention, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present specification should not be construed as a limitation to the present invention.
Claims (10)
1. A method for reading an audio/video file, characterized in that the method is applied to a video network, wherein the video network comprises a first video network node server and a second video network node server, and the first video network node server communicates with the second video network node server; the first video network node server is used for accessing a monitoring terminal in the video network and acquiring audio/video data of the monitoring terminal; the second video network node server is used for converting the audio/video data from the first video network node server into an audio/video data packet supporting a real-time transport protocol; the first video network node server is a back-end server in a video network monitoring and networking management scheduling platform, and the second video network node server is a video network monitoring sharing server; the back-end server in the video network monitoring and networking management scheduling platform is responsible for the unified management of the monitoring terminals accessed in the whole video network; the video network monitoring sharing server is used for converting the video data of the monitoring terminals in the video network into video data packets of the real-time transport protocol and sharing the video data of the video network with a monitoring platform based on the real-time transport protocol; and the method comprises the following steps:
the second video network node server extracts the audio/video data from the audio/video data packet in real time and stores the audio/video data extracted in real time as an audio/video file;
the second video network node server receives a reading instruction from the first video network node server, wherein the reading instruction comprises identification information of an audio/video file to be read and multiple-speed reading information, and the multiple-speed reading information is used for determining the reading speed of the target audio/video file;
the second video network node server determines a target audio/video file according to the identification information of the audio/video file to be read, and reads the target audio/video file according to the multiple-speed reading information;
the second video network node server transmits the read target audio/video file to the first video network node server, and the first video network node server is used for displaying the target audio/video file;
the method further comprises the following steps:
and while or after the second video network node server reads the target audio/video file according to the multiple-speed reading information, the second video network node server suspends reading the target audio/video file and performs a single-frame playing operation according to a single-frame playing instruction from the first video network node server, or resumes reading the target audio/video file from the position at which reading was suspended according to a resume instruction from the first video network node server.
2. The method for reading an audio/video file according to claim 1, wherein the reading of the target audio/video file by the second video network node server according to the multiple-speed reading information comprises:
the second video network node server estimates frame time interval information of the target audio/video file according to timestamp information in the program stream packet header of each frame of the target audio/video file;
the second video network node server divides the frame time interval information by the multiple-speed reading information to obtain multiple-speed reading time interval information for reading each audio/video frame of the target audio/video file;
and the second video network node server reads the target audio/video file according to the multiple-speed reading time interval information.
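As a rough illustration, the interval computation described in claim 2 (frame interval estimated from packet-header timestamps, then divided by the speed multiple) can be sketched as follows; the function name, the 90 kHz clock, and the list-based input are assumptions for the sketch, not details from the patent:

```python
def read_intervals(timestamps, speed_multiple, clock_hz=90_000):
    """Per-frame read intervals (seconds) for multiple-speed playback.

    `timestamps` are successive frame timestamps, in clock ticks, taken
    from each frame's program-stream packet header; each estimated frame
    interval is divided by the speed multiple, as in claim 2.
    """
    intervals = []
    for prev, curr in zip(timestamps, timestamps[1:]):
        frame_interval = (curr - prev) / clock_hz          # seconds between frames
        intervals.append(frame_interval / speed_multiple)  # shorter wait = faster playback
    return intervals

# A 25 fps stream has frames 3600 ticks apart at a 90 kHz clock (0.04 s);
# at 2x speed each frame would be read on a 0.02 s schedule instead.
intervals = read_intervals([0, 3600, 7200], speed_multiple=2)
```

Reading the stored file on this tighter schedule, rather than re-encoding it, is what lets the same file serve any playback speed.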
3. The method for reading an audio/video file according to claim 1, wherein after the second video network node server determines the target audio/video file according to the identification information of the audio/video file to be read and reads the target audio/video file according to the multiple-speed reading information, the method further comprises:
the second video network node server receives a single-frame playing instruction from the first video network node server, and suspends the operation of reading the target audio/video file according to the single-frame playing instruction;
and the second video network node server determines each single frame of data of the target audio/video file according to the whole-frame identification information in the program stream packet header of each frame of the target audio/video file, and plays each determined single frame of data in sequence.
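The single-frame determination in claim 3 relies on a whole-frame identification carried in each program-stream packet header. A minimal sketch, assuming a simplified packet layout with a hypothetical `frame_end` flag (real program-stream headers are more involved):

```python
def split_single_frames(packets):
    """Group program-stream packets into whole frames.

    Each packet is a dict with a `payload` and a `frame_end` flag standing
    in for the whole-frame identification in the packet header; a set flag
    closes the current frame, so frames can then be played one at a time.
    """
    frames, current = [], []
    for pkt in packets:
        current.append(pkt["payload"])
        if pkt["frame_end"]:          # whole-frame boundary reached
            frames.append(b"".join(current))
            current = []
    return frames
```

Once the file is split this way, serving one element of `frames` per single-frame playing instruction gives the claimed stepping behavior.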
4. The method for reading an audio/video file according to claim 3, wherein after the second video network node server determines each single frame of data of the target audio/video file according to the whole-frame identification information in the program stream packet header of each frame of the target audio/video file and sequentially plays each determined single frame of data, the method further comprises:
the second video network node server receives a resume instruction from the first video network node server, and according to the resume instruction adjusts the read time of the audio/video frame following the frame at which reading of the target audio/video file was suspended to the current time;
and the second video network node server plays the target audio/video file from that following audio/video frame at the current time.
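The resume step in claim 4 amounts to rescheduling: the read time of the frame after the pause point is moved up to the current time, so playback continues from where it stopped. A sketch using a hypothetical clock class not taken from the patent:

```python
import time

class PlaybackClock:
    """Tracks when the next audio/video frame is due to be read."""

    def __init__(self, interval):
        self.interval = interval              # seconds between frame reads
        self.next_read_at = time.monotonic()  # when the next frame is due
        self.paused = False

    def pause(self):
        self.paused = True                    # timed reading stops here

    def resume(self):
        # Claim 4: adjust the next frame's read time to the current time,
        # then continue reading on the usual interval from there.
        self.next_read_at = time.monotonic()
        self.paused = False

    def advance(self):
        """Schedule the frame after the one just read."""
        self.next_read_at += self.interval
        return self.next_read_at
```

Without the adjustment in `resume`, a scheduler comparing `next_read_at` to the current time would see every frame as overdue after a long pause and read them in a burst.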
5. The method for reading an audio/video file according to any one of claims 1 to 4, wherein before the second video network node server determines the target audio/video file according to the identification information of the audio/video file to be read, the method further comprises:
the second video network node server receives a search instruction from the first video network node server, wherein the search instruction comprises one or more of search start time information, search end time information, and search location information;
and the second video network node server searches the stored audio/video files according to the search instruction to obtain the target audio/video file.
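The search in claim 5 filters stored files by any combination of start time, end time, and location. A sketch over an in-memory catalog (the record layout and field names are assumptions for illustration):

```python
from datetime import datetime

def find_target_files(catalog, start=None, end=None, location=None):
    """Return files whose records match every criterion that was supplied.

    `catalog` holds dicts with `file`, `time` (a datetime), and `location`;
    criteria left as None are ignored, mirroring the claim's "one or more
    of" search-instruction fields.
    """
    hits = []
    for rec in catalog:
        if start is not None and rec["time"] < start:
            continue
        if end is not None and rec["time"] > end:
            continue
        if location is not None and rec["location"] != location:
            continue
        hits.append(rec["file"])
    return hits
```

The matching files (or the single match) then become the target audio/video file for the subsequent reading steps.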
6. A system for reading an audio/video file, characterized in that the system is applied to a video network, wherein the video network comprises a first video network node server and a second video network node server, and the first video network node server communicates with the second video network node server; the first video network node server is used for accessing a monitoring terminal in the video network and acquiring audio/video data of the monitoring terminal; the second video network node server is used for converting the audio/video data from the first video network node server into an audio/video data packet supporting a real-time transport protocol; the first video network node server is a back-end server in a video network monitoring and networking management scheduling platform, and the second video network node server is a video network monitoring sharing server; the back-end server in the video network monitoring and networking management scheduling platform is responsible for the unified management of the monitoring terminals accessed in the whole video network; the video network monitoring sharing server is used for converting the video data of the monitoring terminals in the video network into video data packets of the real-time transport protocol and sharing the video data of the video network with a monitoring platform based on the real-time transport protocol; and the second video network node server comprises:
an extraction and storage module, used for extracting the audio/video data from the audio/video data packet in real time and storing the audio/video data extracted in real time as an audio/video file;
an instruction receiving module, used for receiving a reading instruction from the first video network node server, wherein the reading instruction comprises identification information of an audio/video file to be read and multiple-speed reading information, and the multiple-speed reading information is used for determining the reading speed of the target audio/video file;
a determining and reading module, used for determining a target audio/video file according to the identification information of the audio/video file to be read and reading the target audio/video file according to the multiple-speed reading information;
and a file transmission module, used for transmitting the read target audio/video file to the first video network node server, wherein the first video network node server is used for displaying the target audio/video file;
the second video network node server is further configured to:
and while or after the second video network node server reads the target audio/video file according to the multiple-speed reading information, the second video network node server suspends reading the target audio/video file and performs a single-frame playing operation according to a single-frame playing instruction from the first video network node server, or resumes reading the target audio/video file from the position at which reading was suspended according to a resume instruction from the first video network node server.
7. The system for reading an audio/video file according to claim 6, wherein the determining and reading module comprises:
a frame interval estimation module, used for estimating frame time interval information of the target audio/video file according to timestamp information in the program stream packet header of each frame of the target audio/video file;
a reading interval determining module, used for dividing the frame time interval information by the multiple-speed reading information to obtain multiple-speed reading time interval information for reading each audio/video frame of the target audio/video file;
and a multiple-speed reading module, used for reading the target audio/video file according to the multiple-speed reading time interval information.
8. The system for reading an audio/video file according to claim 6, wherein the instruction receiving module is further used for receiving a single-frame playing instruction from the first video network node server;
the second video network node server further comprises:
a pause reading module, used for suspending the operation of reading the target audio/video file according to the single-frame playing instruction after the target audio/video file has been determined according to the identification information of the audio/video file to be read and read according to the multiple-speed reading information;
and a single-frame playing module, used for determining each single frame of data of the target audio/video file according to the whole-frame identification information in the program stream packet header of each frame of the target audio/video file and playing each determined single frame of data in sequence.
9. The system for reading an audio/video file according to claim 8, wherein the instruction receiving module is further used for receiving a resume instruction from the first video network node server;
the second video network node server further comprises:
a time adjusting module, used for, after the single-frame playing module determines each single frame of data of the target audio/video file according to the whole-frame identification information in the program stream packet header of each frame of the target audio/video file and sequentially plays each determined single frame of data, adjusting the read time of the audio/video frame following the frame at which reading of the target audio/video file was suspended to the current time according to the resume instruction;
and a resume playing module, used for playing the target audio/video file from that following audio/video frame at the current time.
10. The system for reading an audio/video file according to any one of claims 6 to 9, wherein the instruction receiving module is further used for receiving a search instruction from the first video network node server, and the search instruction comprises one or more of search start time information, search end time information, and search location information;
and the determining and reading module is further used for searching the stored audio/video files according to the search instruction to obtain the target audio/video file.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811290246.1A CN109587513B (en) | 2018-10-31 | 2018-10-31 | Method and system for reading audio and video files |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109587513A CN109587513A (en) | 2019-04-05 |
CN109587513B true CN109587513B (en) | 2021-02-09 |
Family
ID=65921062
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811290246.1A Active CN109587513B (en) | 2018-10-31 | 2018-10-31 | Method and system for reading audio and video files |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109587513B (en) |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104394380A (en) * | 2014-12-09 | 2015-03-04 | 浙江省公众信息产业有限公司 | Video monitoring management system and playback method of video monitoring record |
EP3196859A2 (en) * | 2016-01-19 | 2017-07-26 | Honeywell International Inc. | Traffic visualization system |
CN108243153A (en) * | 2016-12-23 | 2018-07-03 | 北京视联动力国际信息技术有限公司 | A kind of method and apparatus in the broadcasting TV programme in networking |
2018-10-31: CN application CN201811290246.1A, patent CN109587513B (en), status Active
Also Published As
Publication number | Publication date |
---|---|
CN109587513A (en) | 2019-04-05 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110121059B (en) | Monitoring video processing method, device and storage medium | |
CN109547728B (en) | Recorded broadcast source conference entering and conference recorded broadcast method and system | |
CN109617956B (en) | Data processing method and device | |
CN110190973B (en) | Online state detection method and device | |
CN108881948B (en) | Method and system for video inspection network polling monitoring video | |
CN110572607A (en) | Video conference method, system and device and storage medium | |
CN109743550B (en) | Method and device for monitoring data flow regulation | |
CN109246135B (en) | Method and system for acquiring streaming media data | |
CN110475131B (en) | Terminal connection method, server and terminal | |
CN109743284B (en) | Video processing method and system based on video network | |
CN110113555B (en) | Video conference processing method and system based on video networking | |
CN110446058B (en) | Video acquisition method, system, device and computer readable storage medium | |
CN110289974B (en) | Data stream processing method, system and device and storage medium | |
CN110022500B (en) | A kind of packet loss processing method and device | |
CN110392227B (en) | Data processing method, device and storage medium | |
CN110062259B (en) | Video acquisition method, system, device and computer readable storage medium | |
CN110134892B (en) | Loading method and system of monitoring resource list | |
CN109963107B (en) | Audio and video data display method and system | |
CN109698953B (en) | State detection method and system for video network monitoring equipment | |
CN109859824B (en) | Pathological image remote display method and device | |
CN111212255A (en) | Monitoring resource obtaining method and device and computer readable storage medium | |
CN110636132A (en) | Data synchronization method, client, electronic device and computer-readable storage medium | |
CN110557411A (en) | video stream processing method and device based on video network | |
CN109587513B (en) | Method and system for reading audio and video files | |
CN110691214B (en) | Data processing method and device for business object |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
CP03 | Change of name, title or address |
Address after: 33rd Floor, No.1 Huasheng Road, Yuzhong District, Chongqing 400013 Patentee after: VISIONVERA INFORMATION TECHNOLOGY Co.,Ltd. Country or region after: China Address before: 100000 Beijing Dongcheng District Qinglong Hutong 1 Song Hua Building A1103-1113 Patentee before: VISIONVERA INFORMATION TECHNOLOGY Co.,Ltd. Country or region before: China |
CP03 | Change of name, title or address |