Background
In the security industry, the distribution and transcoding functions of a traditional streaming media server are deployed separately. In the traditional scene, the code stream formats of cameras were uniform, and the prevailing encapsulation types were likewise uniform. With the development of the security industry, several generations of cameras have appeared, the coding types have diversified from MPEG2/MPEG4 and H264 to H265, and the coding and encapsulation types required by terminal playing devices have likewise diversified, which causes great integration trouble for platform integrators. For an integrator, converging these code streams into a unified output type has become necessary work.
In the prior art, a transcoding server is built, a scheduling module finds the server in the best state, and the input stream is transcoded and output to a front-end device for playing or distribution. However, different terminals require different types of code streams; for example, a PC terminal requires the PS type while a web page terminal requires the SRTP type, so the transcoding server must repeatedly transcode the same code stream into different types, which wastes server performance. This is because transcoding is a multi-stage process (decapsulation, decoding, filtering, encoding, and so on) rather than merely an input and an output, yet existing scheduling schemes only match the input and output stages against server performance; the streams of the other stages cannot be utilized or shared among servers, which causes great waste.
The existing scheduling schemes include DNS scheduling based on flow requests, LVS scheduling based on network-layer load, nginx scheduling at the application layer, and distribution scheduling based on the code stream input/output stages; all of these merely balance server load or select the server with the best performance. Transcoding a code stream is a process in which one path of stream generates multiple stage streams, in the order: original stream, decapsulation, decoding, filtering, encoding, and encapsulation. An ideal scheduling policy should consider not only the actual load capacity of each server but also the service correlation between a new request and existing requests, for example referencing a stage stream of an existing request, such as its decoding-stage stream, to generate the encoded code stream required by the new request. No scheduling scheme currently exists that comprehensively considers both such service correlation across the stage streams of a code stream and server performance.
Disclosure of Invention
The invention aims to overcome the defects of the above situation and provide a technical scheme capable of solving these problems. Through the scheduling scheme, each stage stream of one path of stream can be fully utilized, transcoding efficiency is improved, server performance is saved, and server characteristics can be better exploited, so that a specialized server can generate the code stream of a specific stage.
A cross-platform scheduling method for the field of intelligent production business of manufacturing industry specifically comprises the following steps:
S1, configuring information of the processing ends in a configuration file, including the processing end server IP, the processing end listening port, and server characteristics such as performance parameters, extranet access, GPU running state, and the like;
S2, the strategy module actively connects to each processing end over the TCP protocol, sends a check protocol, and obtains the basic information of the processing end server;
S3, when a notify request from the upper level arrives, the strategy module establishes a tree structure according to the source information of the notify and stores the source information; the processing flow of the source code stream is judged according to the destination information, wherein the processing flow comprises one or more of decapsulation, decoding, filtering, encoding, and encapsulation, and the stage streams are stored in the source information tree;
S4, searching all processing ends for one or more servers with the best performance, forwarding the notify request, and informing them which stage streams need to be generated, kept completely consistent with the strategy module; the reason for searching for more than one server is that several servers may process different stages and then integrate the output, according to server characteristics, such as extranet access or a GPU, for which each is best suited;
S5, collecting the response time of each processing end as a judgment condition for selecting a processing end; if a server times out, a special record is made and alarm feedback is triggered;
S6, the processing end detects the original code stream; if it does not match the strategy module's record, it pushes a protocol message to the strategy module, and the strategy module modifies the source information;
S7, when a notify request from the upper level arrives, the source information tree is searched according to the source id, the reference stage of the request is searched for according to the destination information of the notify, and the destination stream is generated; if the destination stream is generated for the first time, the parameter type of the stage stream is formed into an index and stored in the source information tree; if no source information tree is found for the source id, steps S3 to S6 are repeated;
S8, when a delete request from the upper level arrives, the strategy module decrements the reference counts of the session, checks the reference count of each stage stream of the original stream, and if all reference counts are 0, deletes the stream tree information structure and issues a delete request to the processing end to delete all stage streams of the stream.
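The stage pipeline that steps S3 and S7 traverse can be modeled with a minimal Python sketch (illustrative only; the enum and function names are hypothetical, not part of the invention): given the latest existing stage a request can reference, the stages that still need to be generated follow in pipeline order.

```python
from enum import IntEnum

class Stage(IntEnum):
    """Processing stages of one code stream, in pipeline order."""
    ORIGINAL = 0
    DECAPSULATED = 1
    DECODED = 2
    FILTERED = 3
    ENCODED = 4
    ENCAPSULATED = 5

def stages_needed(reference: Stage, target: Stage) -> list:
    """Stages that must still be generated when a request can
    reference an existing stream at `reference` and needs `target`."""
    return [Stage(s) for s in range(reference + 1, target + 1)]
```

For example, a request that can reference an existing decoded stream and needs an encapsulated output only has to generate the filtering, encoding, and encapsulation stages.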
A multi-stage stream scheduling system comprises an upper level, a strategy end, and a processing end, wherein a strategy module is operated to search for a processing end, search for a processing process on the processing end, update the information of the source resource tree, and generate a request to send to the processing end; the processing end is composed of a plurality of servers with video processing capability.
A multi-stage stream scheduling device comprises computer hardware and network devices for operating the above cross-platform scheduling method oriented to the field of intelligent production business in the manufacturing industry and the above multi-stage stream scheduling system.
The invention has the following beneficial effects. Because existing scheduling schemes only match and distribute the original stream and the encapsulated stream, the streams of the decapsulation, decoding, filtering, and encoding stages are not fully utilized, which is a waste. The present scheduling scheme makes full use of the code stream of each stage: other stage streams can be referenced to generate the code stream type required by a subsequent stage, or the stage streams can be distributed among different servers so that server characteristics are fully exploited for specific stage streams; for example, a GPU server can specifically handle the decoding and encoding processes. The benefits are that the work of regenerating certain stage streams is reduced, server performance is saved, and specific stage streams can be processed by specialized servers, increasing efficiency.
Additional aspects and advantages of the invention will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the invention.
Detailed Description
In the embodiment of the invention, the designed scheme accesses the audio and video of a plurality of subway lines, and is illustrated with the video accessed from one line. The video sources have three encoding formats, H264, MPEG4, and MPEG2, and the output encoding type must be H264, so transcoding is required. There are two playing types, PC end and web page, so different encapsulation formats must be output. The platform deploys three processing ends A, B, and C, where C is the GPU server, and one policy module, schedule.
Firstly, the processing end information is configured in a configuration file, including the processing end IP, PORT, and the GPU mark of processing end C.
The strategy module is started; it reads the configuration file and acquires the IPs and ports of processing ends A/B/C. It then connects to the processing ends, establishing a connection with each process of each processing end; the strategy module stores these connections, sends heartbeat information at regular intervals, and keeps the connections long-lived. The processing end uses multi-process processing, and a connection is established with each process in order to balance the processing load among the processes and make maximal use of their performance.
The strategy module sends a check protocol, a private protocol, to the processing end in order to obtain global information such as CPU utilization, CPU core count, memory utilization, network card performance, and network uplink/downlink traffic of the processing end, as well as per-process information such as CPU utilization and connection count. The strategy selects the server with the best performance, judged according to this basic server information.
The strategy module maintains a global red-black tree that stores the basic information of each source, indexed by the unique id of the video source. The basic information of a source includes the processing ends corresponding to the original stream, the decapsulated stream, the decoded stream, the filtered stream, the encoded stream, and the encapsulated stream, together with the reference count of each stage stream. In principle each stream is generated only once; the original stream, the decapsulated stream, and the decoded stream can each be of only one type, while the filter, encoding, and encapsulation stages can each have multiple types (for example, the encoding may be H264, H265, and so on). Therefore, when the stage streams of the filter, encoding, and encapsulation stages are saved, three independent red-black trees are generated, indexed by the corresponding filter parameters, encoding parameters, and encapsulation parameters. This is the basic data structure of the policy module.
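The data structure above can be sketched in Python as follows (a sketch only: Python dicts stand in for the red-black trees of the actual design, and all class and field names are illustrative assumptions).

```python
class StageStream:
    """One generated phase stream: which processing end holds it,
    plus a reference count (each stream is generated only once)."""
    def __init__(self, server_id, params=None):
        self.server_id = server_id
        self.params = params      # filter/encoding/encapsulation parameters
        self.refcount = 0

class SourceInfo:
    """Per-source record, indexed globally by the unique source id."""
    def __init__(self, source_id):
        self.source_id = source_id
        # single-type stages: at most one stream each
        self.original = None
        self.decapsulated = None
        self.decoded = None
        # multi-type stages: indexed by their parameters,
        # mirroring the three per-source red-black trees
        self.filters = {}
        self.encodes = {}
        self.encaps = {}

# global index: unique source id -> SourceInfo
sources = {}
```

A source with one H264 encoded stream on the GPU server would then be recorded by inserting a `StageStream` under `encodes["h264"]`.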
The operation of the strategy module is divided into four stages, executed in sequence. The first stage searches for the processing end, the second stage searches for a processing process on the processing end, the third stage updates the information of the source resource tree, and the fourth stage generates a request to be sent to the processing end.
In the first stage, the processing end is searched for. First, the global red-black tree is searched according to the unique source id, and the final output stage is calculated according to the destination information of the notify protocol. A search index is constructed mainly from the destination information of the notify protocol, a reference stage is searched for in the resource tree, and the processing end handling that stage stream is obtained from the reference stage information. For example, if a notify request has filter parameters, the request needs the four processes of decoding, filtering, encoding, and encapsulation; the final encapsulation parameter index is searched for in the encapsulation red-black tree of the resource tree, and if it is found, the search ends and the encapsulated stream is referenced; otherwise the encoding red-black tree, the filter red-black tree, the decoding stage, the decapsulation stage, and the original stream are searched in sequence for a reference stage, so the minimum reference stage is the original stream. The current performance of the processing end is then judged; if the performance does not meet the condition, the stage stream is sent to another server for processing, and a stream referenced from another server is called an external reference stream. This situation arises only when the processing end cannot handle the stream itself. The external reference stream has a further application: a hot video may allow only a single access at the source while a large number of requests arrive, so the strategy module distributes the stream internally and different processing ends then send it to different terminals.
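The backward search for the latest reusable stage can be sketched as follows (assumptions: the source tree is modeled as a plain dict, single-type stages hold one stream, multi-type stages hold a parameter-indexed dict; all names are illustrative).

```python
# Search order: latest stage first; the original stream is the
# minimum (always-available) reference stage.
SEARCH_ORDER = ["encapsulated", "encoded", "filtered",
                "decoded", "decapsulated", "original"]

def find_reference_stage(source, request_params):
    """Return (stage_name, stream) for the latest existing phase
    stream the request can reference.

    `source` maps a stage name to either a single stream or a
    parameter-indexed dict of streams; `request_params` maps a
    multi-type stage name to the parameters the request needs."""
    for stage in SEARCH_ORDER:
        slot = source.get(stage)
        if slot is None:
            continue
        if isinstance(slot, dict):
            # multi-type stage: only reusable if the exact
            # requested parameters were already generated
            key = request_params.get(stage)
            if key in slot:
                return stage, slot[key]
        else:
            return stage, slot
    return None, None
```

With an existing H264 encoded stream, an H264 request references the encoding stage directly, while an H265 request falls back to the decoded stream and must re-encode from there.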
If the parsed notify destination information involves transcoding, a GPU server is preferentially selected for processing: the stream is accessed by processing end A, internally forwarded to processing end C (the GPU), processed up to the encoding stage, then sent to server A or B, where the encapsulated stream is generated and sent to the terminal. The A/B servers handle the input/output stages, mainly performing distribution by exploiting their high-bandwidth characteristic, while processing end C is responsible for the transcoding function, transcoding efficiently by exploiting the GPU. If there is an extranet request, an extranet processing end must be selected; this requirement does not arise in the present project.
Formula for judging server performance:
P = I × ( Pre + Σ S(i) × X(i) )
where:
P: performance index parameter of the processing end;
I: rejection condition, where 0 represents that the rejection condition is satisfied and 1 that it is not satisfied, for example an extranet-only server;
S(i): weight;
X(i): attribute value, such as CPU utilization rate or memory utilization rate;
Pre: priority, whose value is Base when prioritized and 0 otherwise;
Base: the base number keeping terms at the same order of magnitude; when attribute values lie in 0-1, Base is 1, and when they lie in 0-100, Base is 100.
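Under the assumption that the score has the form P = I × (Pre + Σ S(i) × X(i)) implied by the parameter definitions above (the exact formula is not reproduced here, so this is a sketch, not the definitive formula), a scoring helper might look like:

```python
def server_score(reject, weights, attrs, priority, base):
    """Score one server for selection.

    reject:   1 if the server passes the rejection check, 0 if the
              rejection condition is satisfied (e.g. extranet-only);
              0 zeroes the whole score.
    weights:  the S(i) values.
    attrs:    the X(i) values, assumed already oriented so that a
              larger value means a better candidate, and scaled to
              the same order of magnitude as `base`.
    priority: True adds `base` as the Pre term, e.g. preferring the
              GPU server for transcoding requests.
    """
    pre = base if priority else 0
    return reject * (pre + sum(s * x for s, x in zip(weights, attrs)))
```

The priority bonus of exactly `base` guarantees that a prioritized server outranks non-prioritized ones while the weighted attribute sum still breaks ties among prioritized servers.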
In the second stage, on the basis of the one or more processing ends selected in the first stage, a processing process is selected on each. When selecting a process, the main references are the CPU utilization rate of the process and its network connection ratio (the current connection count divided by the maximum connection count of a single process). The strategy module sets a weight for each of the two parts; the weight represents the proportion of that part in the final performance value. For example, if the CPU utilization weight is 70 and the network connection ratio weight is 30, CPU accounts for 70% of the process selection. The weight values are not fixed; during the use of the project, information is collected and appropriate weights are then analyzed.
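A minimal sketch of this process selection (the 70/30 split follows the example in the text; the function and tuple layout are illustrative assumptions) selects the process with the lowest weighted load:

```python
def pick_process(processes, w_cpu=70, w_conn=30):
    """Pick the least-loaded process of a processing end.

    Each process is a (cpu_usage_percent, connections, max_connections)
    tuple; a lower weighted load is better.  Returns the index of the
    selected process."""
    def load(p):
        cpu, conns, max_conns = p
        return w_cpu * cpu / 100 + w_conn * (conns / max_conns)
    return min(range(len(processes)), key=lambda i: load(processes[i]))
```

With one process at 90% CPU and 10/100 connections and another at 20% CPU and 50/100 connections, the weighted loads are 66 and 29, so the second process is chosen even though it holds more connections.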
In the third stage, the information of the source resource tree is updated. From the destination information in the notify, it can be calculated which stages the request needs to reference and which stages it will generate, and this information is saved. If the source is new, its red-black tree must be created and initialized, and its filter, encoding, and encapsulation red-black trees are created at the same time to store the information of each generated stage stream of the source. Updating the resource tree information means, on one hand, inserting the stage streams newly generated by the request at the corresponding stage positions in the source information tree, and on the other hand, updating the reference relationships and the reference counts of each stage. This stage keeps the information related to the source so that the reference stage can be found in the first stage.
In the fourth stage, a request is generated and sent to the processing end. At this point the main work of the policy module is finished; the task of this stage is to send a protocol request to the selected process of the processing end, whose content is basically consistent with the notify protocol sent by the upper level to the policy module. The policy module adds the information of the stages referenced by the request, so that the referenced stage streams can be found from this information, and informs the processing end which stage streams need to be generated.
The last part is deletion. After receiving a delete command from the upper level, the strategy module judges whether the session should be deleted according to the reference stages and output stages of the session. When the reference counts of all output stages are 0, the session may be deleted.
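The reference-counted deletion described above and in step S8 can be sketched as follows (a sketch under the same dict-based source-tree assumption as before; names are illustrative): each delete releases the session's references, and the source tree may be torn down only when every stage stream's count reaches zero.

```python
class Stream:
    """A phase stream carrying only its reference count."""
    def __init__(self, refcount=0):
        self.refcount = refcount

def iter_streams(source):
    """Yield every phase stream stored in the source tree, covering
    both single-type slots and parameter-indexed multi-type dicts."""
    for slot in source.values():
        if isinstance(slot, dict):
            yield from slot.values()
        elif slot is not None:
            yield slot

def handle_delete(source, session_refs):
    """Release one session's references.  Returns True when all
    reference counts are 0, meaning the caller may delete the tree
    and issue delete requests to the processing ends."""
    for stream in session_refs:
        stream.refcount -= 1
    return all(s.refcount == 0 for s in iter_streams(source))
```

If two sessions share an encoded stream, deleting the first session leaves its count at 1 and the tree intact; deleting the second drops every count to 0 and signals full teardown.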
The above description is a specific implementation flow of the policy module, that is, an implementation flow of the scheduling scheme.
It will be evident to those skilled in the art that the invention is not limited to the details of the foregoing illustrative embodiments, and that the present invention may be embodied in other specific forms without departing from the spirit or essential attributes thereof. The present embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein.