
CN114079749A - A cross-platform system for the field of manufacturing intelligent production business - Google Patents


Info

Publication number
CN114079749A
CN114079749A (application CN202010829448.XA)
Authority
CN
China
Prior art keywords
stage
processing
flow
server
information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010829448.XA
Other languages
Chinese (zh)
Inventor
王玉敏
柳鹏
葛健
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shandong Hanxin Technology Co ltd
Original Assignee
Shandong Hanxin Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shandong Hanxin Technology Co ltd filed Critical Shandong Hanxin Technology Co ltd
Priority to CN202010829448.XA priority Critical patent/CN114079749A/en
Publication of CN114079749A publication Critical patent/CN114079749A/en
Pending legal-status Critical Current

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/18Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N21/4402Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/60Network structure or processes for video distribution between server and client or between remote clients; Control signalling between clients, server and network components; Transmission of management data between server and client, e.g. sending from server to client commands for recording incoming content stream; Communication details between server and client 
    • H04N21/63Control signaling related to video distribution between client, server and network components; Network processes for video distribution between server and clients or between remote clients, e.g. transmitting basic layer and enhancement layers over different transmission paths, setting up a peer-to-peer communication via Internet between remote STB's; Communication protocols; Addressing
    • H04N21/643Communication protocols
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/60Network structure or processes for video distribution between server and client or between remote clients; Control signalling between clients, server and network components; Transmission of management data between server and client, e.g. sending from server to client commands for recording incoming content stream; Communication details between server and client 
    • H04N21/63Control signaling related to video distribution between client, server and network components; Network processes for video distribution between server and clients or between remote clients, e.g. transmitting basic layer and enhancement layers over different transmission paths, setting up a peer-to-peer communication via Internet between remote STB's; Communication protocols; Addressing
    • H04N21/647Control signaling between network components and server or clients; Network processes for video distribution between server and clients, e.g. controlling the quality of the video stream, by dropping packets, protecting content from unauthorised alteration within the network, monitoring of network load, bridging between two different networks, e.g. between IP and wireless
    • H04N21/64723Monitoring of network processes or resources, e.g. monitoring of network load
    • H04N21/64738Monitoring network characteristics, e.g. bandwidth, congestion level

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Computer Security & Cryptography (AREA)
  • Computer And Data Communications (AREA)

Abstract

The invention discloses a cross-platform system for the field of manufacturing intelligent production business. The method first configures the processing-end information in a configuration file, including the processing-end IP and port and the GPU flag of processing end C. A policy module is then started; it reads the configuration file and obtains the IP, port, and other details of processing ends A/B/C. The scheduling scheme makes full use of the code streams of each stage: the stream of one stage can be referenced to generate the code-stream type required by a subsequent stage, and stage streams can be distributed among different servers so that server characteristics are fully exploited for specific stages; for example, a GPU server can be dedicated to the decoding and encoding steps. The benefits are that repeated generation of certain stage streams is reduced, server capacity is saved, and specific servers can handle specific stage streams, improving efficiency.

Description

Cross-platform system for field of intelligent production business of manufacturing industry
Technical Field
The invention relates to the field of security protection, in particular to a cross-platform system for the field of intelligent production business of manufacturing industry.
Background
In the security industry, the distribution and transcoding functions of a traditional streaming-media server are deployed separately, and in traditional scenarios the code streams of the cameras are uniform. As the security industry has developed, several generations of cameras have appeared: coding types range from MPEG-2 and MPEG-4 through H.264 to H.265, and the coding and packaging types required by terminal playback devices vary just as widely, which causes great integration trouble for platform integrators. For an integrator, converging these code streams into a unified output type has become necessary work.
In the prior art, a transcoding server is built, a scheduling module finds the server in the best state, and the input stream is transcoded and output to a front-end device for playback or distribution. However, different terminals require different code-stream types: for example, a PC terminal requires the PS type while a web-page terminal requires the SRTP type, so the transcoding server must transcode the same code stream repeatedly into different types, wasting server capacity. Transcoding is a multi-stage process (decapsulation, decoding, filtering, encoding, and so on), not merely an input and an output; yet existing scheduling schemes only match the input and output stages against server performance. Streams at the intermediate stages cannot be reused or shared among servers, which is a great waste.
Existing scheduling schemes include DNS scheduling based on the stream request, LVS scheduling based on network-layer load, nginx scheduling at the application layer, and distribution scheduling based on the code-stream input/output stages; all of them balance server load or select the best-performing server. Transcoding a code stream is a process that turns one stream into a sequence of stage streams, in the order: original stream, decapsulation, decoding, filtering, encoding, and encapsulation. A scheduling policy should therefore consider not only the actual load capacity of each server but also the service correlation between a new request and existing requests — for example, referencing an existing request's decoding-stage stream to generate the encoded code stream the new request needs. No current scheme comprehensively weighs such per-stage service correlation together with server performance.
Disclosure of Invention
The invention aims to overcome the above defects and provide a technical scheme that solves these problems. Through the scheduling scheme, every stage stream of a single input stream can be fully utilized, transcoding efficiency is improved, server capacity is saved, server characteristics are better exploited, and a dedicated server can generate the code stream of a specific stage.
A cross-platform scheduling method for the field of intelligent production business of the manufacturing industry comprises the following steps:
S1: configure the processing-end information in a configuration file, including the processing-end server IP, the processing-end listening port, and server characteristics such as performance parameters, extranet access, and GPU running state;
S2: the policy module actively connects to each processing end over the TCP protocol, sends the check protocol, and obtains basic information about the processing-end server;
S3: when a superior notify request arrives, the policy module builds a tree structure from the notify source information and stores that information; it determines the processing flow for the source code stream from the destination information — one or more of decapsulation, decoding, filtering, encoding, and encapsulation — and saves these stage streams in the source information tree;
S4: search all processing ends for one or more best-performing servers, forward the notify request to them, and tell them which stage streams to generate, keeping them fully consistent with the policy module. One reason for selecting more than one server is that several servers can each process different stages according to their strengths (extranet access, GPU, and so on) before the results are integrated and output;
S5: collect the response time of each processing end as one criterion for selecting a processing end. If a server times out, a special record is made and alarm feedback is triggered;
S6: the processing end probes the original code stream; if it does not match what the policy module recorded, it pushes a protocol message to the policy module, which then corrects the source information;
S7: when a superior notify request arrives, look up the source information tree by source id and, from the notify destination information, find the stage this request can reference, then generate the destination stream. If the destination stream is generated for the first time, form an index from the parameter types of the stage stream and store it in the source information tree. If no source information tree is found for the source id, repeat steps S3 to S6;
S8: when a superior delete request arrives, the policy module removes the session's reference and checks the reference count of each stage stream of the original stream; if all counts are 0, the policy module deletes the stream-tree information structure and issues a delete request to the processing end to remove all stage streams of that stream.
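The lifecycle described in steps S3, S7, and S8 — build a per-source tree on first notify, index stage streams by their parameters, and tear the tree down when every reference count reaches 0 — can be sketched as follows. All names (`SourceTree`, `notify`, `delete`, the parameter strings) are illustrative, not from the patent.

```python
# Hypothetical sketch of the S3/S7/S8 notify/delete handling described above.

STAGES = ["original", "decapsulate", "decode", "filter", "encode", "encapsulate"]

class StageStream:
    def __init__(self, stage, params):
        self.stage = stage
        self.params = params      # e.g. codec / container parameters
        self.refcount = 0

class SourceTree:
    """Per-source record keyed by unique source id (S3)."""
    def __init__(self, source_id):
        self.source_id = source_id
        self.streams = {}         # (stage, params) -> StageStream

    def get_or_create(self, stage, params):
        key = (stage, params)
        if key not in self.streams:              # S7: first generation -> index by params
            self.streams[key] = StageStream(stage, params)
        self.streams[key].refcount += 1
        return self.streams[key]

    def release(self, stage, params):
        """S8: drop one reference; report True when nothing references the source."""
        self.streams[(stage, params)].refcount -= 1
        return all(s.refcount == 0 for s in self.streams.values())

sources = {}

def notify(source_id, stage, params):
    tree = sources.setdefault(source_id, SourceTree(source_id))   # S3 if new source
    return tree.get_or_create(stage, params)                      # S7

def delete(source_id, stage, params):
    if sources[source_id].release(stage, params):
        del sources[source_id]    # S8: all refcounts 0 -> delete the whole tree

notify("cam-1", "encapsulate", "PS")
notify("cam-1", "encapsulate", "SRTP")   # second terminal reuses the same source tree
delete("cam-1", "encapsulate", "PS")
assert "cam-1" in sources                # SRTP stream still referenced
delete("cam-1", "encapsulate", "SRTP")
assert "cam-1" not in sources            # everything released -> tree removed
```

The point of the refcount check in `release` is exactly the condition in S8: a source's tree structure is only discarded once no session references any of its stage streams.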
A multi-stage flow scheduling system comprises a superior, a policy end, and a processing end. The policy module runs to find a processing end, find the processing process on that end, update the information in the source resource tree, and generate requests sent to the processing end; the processing end consists of multiple servers with video-processing capability.
A multi-stage flow scheduling device comprises the computer hardware and network equipment used to run the above cross-platform scheduling method and multi-stage flow scheduling system for the field of intelligent production business of the manufacturing industry.
The invention has the following beneficial effects. Existing scheduling schemes match and distribute only the original and encapsulated streams, so the streams at the decapsulation, decoding, filtering, and encoding stages are not fully utilized, which is a waste. The present scheduling scheme makes full use of the code stream at each stage: other stage streams can be referenced to generate the code-stream type a subsequent stage requires, or stage streams can be distributed among servers so that server characteristics are exploited for specific stages — for example, a GPU server can be dedicated to the decoding and encoding processes. This reduces the repeated generation of certain stage streams, saves server capacity, and lets specific servers handle specific stage streams, improving efficiency.
Additional aspects and advantages of the invention will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the invention.
Detailed Description
In an embodiment of the invention, the designed scheme accesses the audio and video of several subway lines; the illustration below follows the video of one line. The video sources use three encoding formats — H.264, MPEG-4, and MPEG-2 — and the output encoding must be H.264, so transcoding is required. There are two playback types, PC playback and web-page playback, so different packaging formats must be output. The platform deploys three processing ends A, B, and C, where C is a GPU server, plus one policy module (schedule).
First, the processing-end information is configured in a configuration file, including each processing end's IP and PORT and the GPU flag of processing end C.
The policy module is started. It reads the configuration file, obtains the IP and port of processing ends A/B/C, and connects to them: the policy server establishes one connection to every process of each processing end, stores the connections, sends heartbeat messages periodically, and keeps the connections alive. Each processing end is multi-process, and connecting to every process lets the policy module balance load across processes so that their capacity is used to the fullest.
The policy module sends the check protocol — a private protocol — to the processing end in order to obtain global information such as CPU utilization, CPU core count, memory utilization, network-card capacity, and uplink/downlink traffic, plus per-process information such as CPU utilization and connection count. The policy then selects the best-performing server, judging from this basic server information.
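The check protocol is private and its wire format is not given, but the information listed above implies a response shape roughly like the following. Every field name here is an assumption made for the sketch.

```python
# Illustrative shape of the data the private "check" protocol returns,
# per the description above. Field names are assumptions, not the patent's protocol.
import json

check_response = {
    "server": {
        "cpu_usage_pct": 35.0,    # global CPU utilization
        "cpu_cores": 16,
        "mem_usage_pct": 48.5,
        "nic_gbps": 10,           # network-card capacity
        "net_up_mbps": 820,       # uplink traffic
        "net_down_mbps": 1440,    # downlink traffic
    },
    "processes": [                # per-process info used for stage-two selection
        {"pid": 4101, "cpu_usage_pct": 12.0, "connections": 64},
        {"pid": 4102, "cpu_usage_pct": 9.5, "connections": 41},
    ],
}

# The policy module would parse such a message off the long-lived TCP connection.
payload = json.dumps(check_response).encode()
parsed = json.loads(payload)
assert parsed["server"]["cpu_cores"] == 16
```

Whatever the real encoding, the essential design point survives: one message carries both server-global and per-process metrics, so a single check round-trip feeds both the server-selection and process-selection stages.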
The policy module maintains a global red-black tree that stores the basic information of each source, indexed by the video source's unique id. The basic information of a source includes the processing ends holding its original stream, decapsulated stream, decoded stream, filter stream, encoded stream, and encapsulated stream, together with the reference count of each stage stream. In principle, each stream is generated only once. The original, decapsulated, and decoded streams can only be of one type each, whereas the filter, encoding, and encapsulation stages can have multiple types (the encoding may be H.264, H.265, and so on); therefore, when saving the filter, encoding, and encapsulation stage streams, three independent red-black trees are created, indexed by the corresponding filter, encoding, and encapsulation parameters. This is the basic data structure of the policy module.
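A minimal data-model sketch of that structure, with Python dicts standing in for the red-black trees (both are keyed lookups; the tree matters only for ordering and performance). All field names and sample values are illustrative.

```python
# Sketch of the policy module's per-source data structure described above.
# Dicts stand in for red-black trees; names are illustrative, not the patent's.

def new_source_record(source_id):
    return {
        "id": source_id,
        # single-type stages: at most one stream each
        "original": None,
        "decapsulated": None,
        "decoded": None,
        # multi-type stages: independent parameter-indexed "trees"
        "filter": {},        # filter params    -> stream info
        "encode": {},        # coding params    -> stream info (e.g. "H264", "H265")
        "encapsulate": {},   # container params -> stream info (e.g. "PS", "SRTP")
        "refcounts": {},     # stage key -> reference count
    }

# global "red-black tree" keyed by the video source's unique id
global_sources = {}

rec = new_source_record("subway-line1-cam07")
rec["encode"]["H264"] = {"server": "C"}        # generated once, then shared
rec["encapsulate"]["PS"] = {"server": "A"}     # PC terminal
rec["encapsulate"]["SRTP"] = {"server": "B"}   # web terminal
global_sources[rec["id"]] = rec

assert "H264" in global_sources["subway-line1-cam07"]["encode"]
```

The split between single-type and parameter-indexed stages mirrors the text exactly: one decoded stream can feed any number of differently parameterized filter, encode, and encapsulate outputs.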
The operation of the policy module is divided into four stages, executed in sequence: the first stage finds a processing end, the second finds a processing process on that end, the third updates the source resource tree, and the fourth generates a request and sends it to the processing end.
In the first stage, a processing end is located. First, the global red-black tree is searched by the source's unique id; then the final output stage is computed from the destination information of the notify protocol. A search index is built mainly from that destination information, a reference stage is searched for in the resource tree, and the processing end handling that stage stream is obtained from the reference-stage information. For example, a notify request with filter parameters needs four steps — decoding, filtering, encoding, and encapsulation. The final encapsulation-parameter index is looked up in the source's encapsulation red-black tree; if found, the search ends and the encapsulated stream is referenced. Otherwise, the encoding red-black tree, the filter red-black tree, the decoding stage, the decapsulation stage, and finally the original stream are searched in turn for a reference stage, so the minimal reference stage is the original stream. The current performance of the chosen processing end is then checked; if it does not meet the requirements, the stage stream is sent to another server for processing, and a stream referenced on another server is called an external reference stream — a situation that arises only when the original processing end cannot handle the stream. The external reference stream has another application: a hot video is accessed only once at the input while a large number of requests arrive, so the policy module distributes the stream internally and different processing ends then send it to different terminals.
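The backward search for the deepest reusable stage can be sketched as below. The search order (encapsulation, encoding, filter, decoding, decapsulation, original) comes from the text; the data shapes and names are illustrative.

```python
# Backward search for the deepest already-generated stage that a new request can
# reference, per the first-stage description above. Data shapes are illustrative.

SEARCH_ORDER = ["encapsulate", "encode", "filter", "decode", "decapsulate", "original"]
MULTI_TYPE = {"encapsulate", "encode", "filter"}   # parameter-indexed stages

def find_reference_stage(source_record, wanted_params):
    """Return (stage, stream) for the deepest existing stage; original is the floor."""
    for stage in SEARCH_ORDER:
        existing = source_record.get(stage)
        if stage in MULTI_TYPE:
            # look up by the request's parameters for this stage
            stream = (existing or {}).get(wanted_params.get(stage))
        else:
            stream = existing
        if stream is not None:
            return stage, stream
    raise LookupError("source has no streams at all")

record = {
    "encapsulate": {},                      # no matching encapsulated stream yet
    "encode": {"H264": {"server": "C"}},    # an H264 encode already exists
    "filter": {},
    "decode": None,
    "decapsulate": None,
    "original": {"server": "A"},
}
stage, stream = find_reference_stage(record, {"encapsulate": "PS", "encode": "H264"})
assert stage == "encode"   # reuse the encoded stream; only encapsulation remains
```

This is the mechanism that saves work: the request above skips decoding, filtering, and encoding entirely because an H264-encoded stream already exists, and only the PS encapsulation must be generated.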
If the parsed notify destination information involves transcoding, a GPU server is preferred: processing end A accesses the stream and forwards it internally to processing end C (the GPU server), which processes it up to the encoding stage; the stream is then sent to server A or B, where the encapsulated stream is generated and sent to the terminal. The A/B servers handle the input/output stages, exploiting their high network bandwidth mainly for distribution, while processing end C is responsible for transcoding, exploiting the GPU for efficient transcoding. If a request comes from the external network, an extranet processing end must be selected; that requirement does not arise in this project.
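That A → C (GPU) → A/B routing can be expressed as a tiny planning function. This is a sketch of the routing rule just described, under the assumption that extranet requests would simply pick an extranet-capable output server if one were configured.

```python
# Sketch of the stage routing described above: A accesses the stream, C (GPU)
# decodes/encodes when transcoding is needed, then A or B encapsulates and
# distributes. Function and server names are illustrative.

def route(needs_transcode, is_extranet, extranet_server=None):
    plan = [("A", "access/input")]                 # A/B: high-bandwidth I/O servers
    if needs_transcode:
        plan.append(("C", "decode+encode (GPU)"))  # C: GPU transcoding server
    out = extranet_server if (is_extranet and extranet_server) else "A"
    plan.append((out, "encapsulate+distribute"))
    return plan

plan = route(needs_transcode=True, is_extranet=False)
assert [server for server, _ in plan] == ["A", "C", "A"]
```

The design choice encoded here is the patent's central one: pin the bandwidth-bound stages (input, distribution) to high-bandwidth servers and the compute-bound stages (decode, encode) to the GPU server, instead of running the whole pipeline wherever the request happened to land.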
Formula for judging server performance (the formula itself appears in the source only as an image, referenced as Figure 808666DEST_PATH_IMAGE001; its parameters are defined as follows):
P: performance index of the processing end;
I: rejection condition — 0 means the rejection condition is met (e.g. an extranet-only server), 1 means it is not;
S(i): weight of attribute i;
X(i): value of attribute i, such as CPU utilization or memory utilization;
Pre: priority term — its value is Base when the server is prioritized, 0 otherwise;
Base: a constant that keeps terms on the same order of magnitude — 1 when values range over 0–1, 100 when they range over 0–100.
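Since the formula image did not survive extraction, the following is only one plausible reading of the parameter list above — an assumption, not the patent's actual formula: `score = I * (Σ S(i)·X(i) + Pre)`, where the rejection gate `I` zeroes out rejected servers and `Pre` adds `Base` for prioritized ones, keeping the priority bonus on the same order of magnitude as the weighted attributes.

```python
# Hedged reconstruction of the server-performance score. The exact formula is
# lost (image-only in the source); this assumes score = I * (sum S(i)*X(i) + Pre).

def server_score(attrs, weights, rejected=False, prioritized=False, base=100):
    i = 0 if rejected else 1                        # rejection condition gates everything
    weighted = sum(weights[k] * attrs[k] for k in weights)
    pre = base if prioritized else 0                # Pre = Base or 0, per the text
    return i * (weighted + pre)

# attribute values scaled 0..100, so Base = 100 per the definition above
attrs = {"cpu_free": 60, "mem_free": 80, "net_headroom": 70}
weights = {"cpu_free": 0.5, "mem_free": 0.2, "net_headroom": 0.3}

gpu_server = server_score(attrs, weights, prioritized=True)   # GPU preferred for transcoding
plain = server_score(attrs, weights)
extranet_rejected = server_score(attrs, weights, rejected=True)

assert gpu_server > plain > extranet_rejected == 0
```

Whatever the original formula's exact shape, the parameter list pins down its behavior: rejected servers score 0 regardless of load, and prioritization shifts a server's score by a full Base.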
and in the second stage, on the basis of selecting one or more processing terminals in the first stage, a processing process is selected for the processing terminals. When selecting the process, the cpu utilization rate and the network connection number ratio (the current network connection number is divided by the maximum connection number of a single process) of the process are mainly referred to. The strategy module respectively sets a weight for the two parts, and the concept of the weight is to represent the proportion of the final performance value. For example, the cpu usage is weighted to 70 and the network connection number ratio is 30, and we consider that cpu accounts for 70% of the process selection. The value of the weight is not fixed, and during the use of the project, information is collected and then the appropriate weight is analyzed.
In the third stage, the source resource tree is updated. From the destination information in the notify request it can be computed which stages the request references and which stages will be generated, and this information must be saved. If the source is new, its red-black tree is created and initialized, together with its filter, encoding, and encapsulation red-black trees, which store the information of the streams generated at each stage of the source. Updating the resource tree means, on the one hand, inserting the stage streams newly generated by this request at the corresponding positions in the source information tree and, on the other hand, updating the reference relations and the reference counts of the stages. This stage keeps all information related to the source so that the first stage can find a reference stage.
In the fourth stage, a request is generated and sent to the processing end. At this point the main work of the policy module is done; this stage's task is to send a protocol request to the selected process of the chosen processing end, with content essentially identical to the notify protocol that the superior sent to the policy module. The policy module adds the information of the stage referenced by the request, so the processing end can find the referenced stage stream from this information and is told which stage streams to generate.
The last part is deletion. After receiving a delete command from the superior, the policy module decides whether to delete the session based on its reference stage and output stages; the session may be deleted when the reference counts of all its output stages are 0.
The above description is a specific implementation flow of the policy module, that is, an implementation flow of the scheduling scheme.
It will be evident to those skilled in the art that the invention is not limited to the details of the foregoing illustrative embodiments, and that the present invention may be embodied in other specific forms without departing from the spirit or essential attributes thereof. The present embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein.

Claims (9)

1. A cross-platform scheduling method for the field of manufacturing intelligent production business, characterized by comprising the following steps: S1, configuring the processing-end information in a configuration file, including the processing-end server IP, the processing-end listening port, and server characteristics; S2, the policy module actively connecting to the processing end over the TCP protocol, sending the check protocol, and obtaining basic information about the processing-end server; S3, upon arrival of a superior notify request, the policy module building a tree structure from the notify source information and saving the source information, determining the processing flow for the source code stream from the destination information, and saving these stage streams to the source information tree; S4, finding one or more best-performing servers among all processing ends, forwarding the notify request of step S3 to them, informing them which stage streams to generate, and keeping them fully consistent with the policy module; S5, collecting the response time of the processing end as a criterion for selecting a processing end; S6, the processing end probing the original code stream and, when it does not match the policy module, pushing a protocol message to the policy module, which then modifies the source information; S7, upon arrival of a superior notify request, looking up the source information tree by source id, finding the reference stage of this request from the notify destination information, and generating the destination stream; S8, upon arrival of a superior delete request, the policy module deleting the session's reference count, checking the reference count of each stage stream of the original stream and, if all are 0, deleting the stream-tree information structure and issuing a delete request to the processing end to delete all stage streams of this stream.
2. The cross-platform scheduling method according to claim 1, wherein in step S3 the processing flow of the source code stream comprises one or more of decapsulation, decoding, filtering, encoding, and encapsulation.
3. The cross-platform scheduling method according to claim 1, wherein in step S5, when a server times out, alarm feedback is additionally triggered.
4. A multi-stage flow scheduling system, comprising a superior, a policy end, and a processing end, wherein the policy module runs to find the processing end, find the processing process of the processing end, update the information of the source resource tree, and generate requests sent to the processing end; the processing end consists of multiple servers with video-processing capability.
5. A multi-stage flow scheduling device, characterized by comprising the computer hardware and network equipment used to run the cross-platform scheduling method and the multi-stage flow scheduling system for the field of manufacturing intelligent production business.
6. The cross-platform scheduling method according to claim 1, wherein in step S5, when a server times out, a special record is made.
7. The cross-platform scheduling method according to claim 1, wherein in step S7, if no source information tree is found for the source id, steps S3 to S6 are repeated.
8. The cross-platform scheduling method according to claim 1, wherein in step S7, if the destination stream is generated for the first time, the parameter types of the stage stream are formed into an index and saved to the source information tree.
9. The cross-platform scheduling method according to claim 1, wherein in step S2 the basic information of the processing-end server includes performance parameters, extranet status, and GPU running status.
CN202010829448.XA 2020-08-18 2020-08-18 A cross-platform system for the field of manufacturing intelligent production business Pending CN114079749A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010829448.XA CN114079749A (en) 2020-08-18 2020-08-18 A cross-platform system for the field of manufacturing intelligent production business

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010829448.XA CN114079749A (en) 2020-08-18 2020-08-18 A cross-platform system for the field of manufacturing intelligent production business

Publications (1)

Publication Number Publication Date
CN114079749A true CN114079749A (en) 2022-02-22

Family

ID=80281256

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010829448.XA Pending CN114079749A (en) 2020-08-18 2020-08-18 A cross-platform system for the field of manufacturing intelligent production business

Country Status (1)

Country Link
CN (1) CN114079749A (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080162713A1 (en) * 2006-12-27 2008-07-03 Microsoft Corporation Media stream slicing and processing load allocation for multi-user media systems
CN101707543A (en) * 2009-11-30 2010-05-12 北京中科大洋科技发展股份有限公司 Enterprise media bus system supporting multi-task type and enterprise media bus method supporting multi-task type
CN102802053A (en) * 2012-07-23 2012-11-28 深圳市融创天下科技股份有限公司 Audio and video file transcoding cluster dispatching method and device
CN107070686A (en) * 2016-12-23 2017-08-18 武汉烽火众智数字技术有限责任公司 A kind of system and method for the parallel transcoding of video monitoring platform code stream
CN109213593A (en) * 2017-07-04 2019-01-15 阿里巴巴集团控股有限公司 Resource allocation methods, device and equipment for panoramic video transcoding
CN110868610A (en) * 2019-10-25 2020-03-06 富盛科技股份有限公司 Streaming media transmission method and device and server
CN111613234A (en) * 2020-05-29 2020-09-01 富盛科技股份有限公司 Multi-stage flow scheduling method, system and device

Similar Documents

Publication Publication Date Title
CN111613234B (en) Multi-stage flow scheduling method, system and device
RU2586639C2 (en) Method and apparatus encoding or decoding
CN103957341B (en) The method of picture transfer and relevant device thereof
Bouaafia et al. Deep learning-based video quality enhancement for the new versatile video coding
CN114845134B (en) File packaging method, file transmission method, file decoding method and related equipment
US11089334B1 (en) Methods and systems for maintaining quality of experience in real-time live video streaming
CN103428494A (en) Image sequence coding and recovering method based on cloud computing platform
CN101326502B (en) Compression/decompression frame for masthead of a newspaper
CN115643310A (en) Method, device and system for compressing data
CN101272378A (en) Method and system for processing session initiation protocol messages
Zakerinasab et al. Dependency-aware distributed video transcoding in the cloud
WO2024217150A1 (en) Video processing method and apparatus, computer device, storage medium and program product
US20070051817A1 (en) Image processing apparatus and image processing method
CN108632679B (en) A kind of method that multi-medium data transmits and a kind of view networked terminals
CN113676738A (en) Geometric encoding and decoding method and device for three-dimensional point cloud
WO2024169391A1 (en) Video data processing method and apparatus, and computer device and storage medium
CN110167193A (en) WiFi matches network method and WiFi equipment automatically
CN114079749A (en) A cross-platform system for the field of manufacturing intelligent production business
CN110868610B (en) Streaming media transmission method, device, server and storage medium
CN105323593B (en) A kind of multi-media transcoding dispatching method and device
US20080267284A1 (en) Moving picture compression apparatus and method of controlling operation of same
CN111935467A (en) Outer projection arrangement of virtual reality education and teaching
CN115599744A (en) File transcoding method, device, storage medium and electronic device
CN104580420A (en) Trans-IDC (internet data center) data transmission system and method
CN106776794B (en) Mass data processing method and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20220222