CN115002041A - Node balance scheduling method, device, equipment and storage medium - Google Patents
- Publication number
- CN115002041A (application CN202210612292.9A)
- Authority
- CN
- China
- Prior art keywords
- node
- server
- client
- response information
- request
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L47/00—Traffic control in data switching networks
- H04L47/10—Flow control; Congestion control
- H04L47/12—Avoiding congestion; Recovering from congestion
- H04L47/125—Avoiding congestion; Recovering from congestion by balancing the load, e.g. traffic engineering
Landscapes
- Engineering & Computer Science (AREA)
- Computer Networks & Wireless Communication (AREA)
- Signal Processing (AREA)
- Computer And Data Communications (AREA)
Abstract
Description
Technical Field
The present invention relates to the field of communication technologies, and in particular to a node balance scheduling method, device, equipment, and storage medium.
Background
TCP (Transmission Control Protocol) is a transport protocol designed to provide a reliable end-to-end byte stream over an unreliable internetwork. The formal definition of TCP was given in RFC 793 in September 1981. Over time, many improvements have been made to it, and various errors and inconsistencies have gradually been fixed. TCP operates at the transport layer of the seven-layer network model. It is a wide-area-network-oriented communication protocol whose purpose is to provide, between two endpoints communicating across multiple networks, a communication channel with the following characteristics: stream-based; connection-oriented; reliable; minimizing the bandwidth overhead caused by retransmission when network conditions are poor; and maintaining the connection only at the two communicating endpoints, regardless of intermediate network segments and nodes. To guarantee the reliability of a TCP connection, a client that wishes to establish a TCP connection with a server must go through a "three-way handshake". First handshake: initially, both the client and the server are in the CLOSED state. When the server actively listens on a port, it enters the LISTEN state. The client generates a random client_isn (initial sequence number), places it in the sequence number field of the TCP header, and sets the SYN flag to 1, marking the segment as a SYN segment. The client sends this first SYN segment to the server to initiate the connection; the segment carries no application-layer data, and the client then enters the SYN-SENT state. Second handshake: upon receiving the client's SYN segment, the server also randomly initializes its own server_isn, fills it into the sequence number field of the TCP header, fills client_isn + 1 into the acknowledgment number field, and sets both the SYN and ACK flags to 1. The server then sends this segment to the client; it likewise carries no application-layer data, and the server enters the SYN-RCVD state. Third handshake: after receiving the server's segment, the client replies with a final acknowledgment segment, in which the ACK flag of the TCP header is set to 1 and the acknowledgment number field is filled with server_isn + 1. The client then sends this segment to the server; this segment may carry client-to-server data, after which the client enters the ESTABLISHED state. Upon receiving the client's acknowledgment, the server also enters the ESTABLISHED state.
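For reference, the following minimal sketch (not part of the patent) mirrors the sequence-number bookkeeping described above; the dictionary representation of a TCP segment is purely illustrative.

```python
import random

def three_way_handshake():
    """Minimal simulation of the sequence/acknowledgment numbers exchanged above."""
    # First handshake: the client picks a random client_isn and sends a SYN
    # segment carrying no application-layer data (client -> SYN-SENT).
    client_isn = random.randint(0, 2**32 - 1)
    syn = {"SYN": 1, "seq": client_isn}
    # Second handshake: the server picks its own server_isn and acknowledges
    # client_isn + 1 (server -> SYN-RCVD); still no application-layer data.
    server_isn = random.randint(0, 2**32 - 1)
    syn_ack = {"SYN": 1, "ACK": 1, "seq": server_isn, "ack": client_isn + 1}
    # Third handshake: the client acknowledges server_isn + 1 and may now
    # carry data (client -> ESTABLISHED; the server follows on receipt).
    ack = {"ACK": 1, "seq": client_isn + 1, "ack": server_isn + 1, "data": b""}
    return syn, syn_ack, ack

if __name__ == "__main__":
    for segment in three_way_handshake():
        print(segment)
```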
In the prior art, depending on the content of connection requests and the type of service, the workload of the individual nodes is usually unbalanced; this imbalance is hard to avoid and changes continuously over time. When data is transmitted over TCP, the following distributed task scheduling algorithms are commonly used. Round-robin: the round-robin algorithm is the simplest and easiest to implement of all the algorithms; it simply rotates linearly through a list of nodes, with the balancer sending each new request to the next node in the node table, and so on. This algorithm is widely used in DNS round-robin. However, naively applying round-robin DNS translation may cause continuous access to the same node, which interferes with normal load balancing and prevents the balancing system from working efficiently. Round-robin is typically suited to clusters in which all nodes have the same processing capacity and performance; in practice it is generally more effective when combined with other simple methods. Weighting: the weighted algorithm distributes load according to the priority or weight of each node, where the weights are assumptions or estimates of each node's capability. The weighted method is best used in combination with other methods, which it complements well. Hashing: the hash method sends network requests to cluster nodes according to some rule via an injective, irreversible hash function; it is similar to simple weighting. Least connections: applied to TCP connections, the management node records all currently active connections and sends the next new request to the node with the fewest connections. Its drawback is that some application-layer sessions consume far more system resources than others; even though the number of connections across the cluster is balanced, the processing load may differ greatly, so the connection count does not reflect the real application load. Minimum misses: the management node keeps a long-term record of the requests sent to each node and forwards the next request to the node that has historically handled the fewest requests. Unlike least connections, minimum misses counts past connections rather than current ones. Fastest response: the management node records its own network response time to each cluster node and assigns the next incoming connection request to the node with the shortest response time. In most LAN-based clusters the fastest-response algorithm does not work well: most modern Ethernet-connected systems under partial load can respond within 1 ms or less, which renders the method meaningless.
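As a point of reference, the sketch below illustrates two of the surveyed policies, round-robin and least connections, in simplified form; the class and node names are assumptions made for illustration, not anything specified by the prior art.

```python
from itertools import cycle

class RoundRobin:
    """Round-robin from the survey above: rotate linearly through the node list."""
    def __init__(self, nodes):
        self._ring = cycle(nodes)

    def pick(self):
        return next(self._ring)

class LeastConnections:
    """Least connections: send the next request to the node with the fewest active connections."""
    def __init__(self, nodes):
        self.active = {node: 0 for node in nodes}

    def pick(self):
        node = min(self.active, key=self.active.get)
        self.active[node] += 1
        return node

    def release(self, node):
        self.active[node] -= 1

# Example usage with made-up node names.
rr = RoundRobin(["node-a", "node-b", "node-c"])
print([rr.pick() for _ in range(4)])   # node-a, node-b, node-c, node-a

lc = LeastConnections(["node-a", "node-b"])
first = lc.pick()                      # node-a (ties broken by insertion order)
second = lc.pick()                     # node-b
lc.release(first)
```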
In summary, how to exploit this imbalance in node workload, automatically schedule nodes according to dynamic load changes, reach an adaptive balance among the nodes, and avoid inter-node communication so as to achieve simple, low-overhead node scheduling remains a problem to be solved in the art.
Summary of the Invention
In view of this, the purpose of the present invention is to provide a node balance scheduling method, device, equipment, and storage medium that can exploit the imbalance in node workload to automatically schedule nodes according to dynamic load changes, reach an adaptive balance among the nodes, and avoid inter-node communication, thereby achieving simple, low-overhead node scheduling. The specific solution is as follows:
In a first aspect, the present application discloses a node balance scheduling method, applied to a server cluster, including:
obtaining a connection request sent by a client;
sending the connection request to each server node, so that each server node processes the connection request according to the order of requests in its own request queue and generates corresponding response information; and
feeding back the response information of each server node to the client, so that the client selects the server node with the shortest feedback time as the target node and sends data to be processed to the target node, so that the target node processes the data to be processed.
Optionally, obtaining the connection request sent by the client includes:
obtaining a connection request sent by the client that contains the public Internet Protocol address bound to a preset network card in the server cluster.
Optionally, sending the connection request to each server node includes:
sending the connection request to the homogeneous network card of each server node through the front-side bus.
Optionally, each server node processing the connection request according to the order of requests in its own request queue and generating corresponding response information includes:
delaying the processing of the connection request based on the load of each server node and the request order, and generating the corresponding response information.
Optionally, after the response information of each server node is fed back to the client so that the client selects the server node with the shortest feedback time as the target node, the method further includes:
obtaining a notification message fed back by the client after the target node has been selected; and
deleting the response information fed back by the server nodes other than the target node.
Optionally, after the response information of each server node is fed back to the client so that the client selects the server node with the shortest feedback time as the target node and the data to be processed is sent to the target node, the method further includes:
comparing, by the target node, the sequence number of the data to be processed with the sequence number in the response information, and processing the data to be processed if the comparison results are consistent.
Optionally, after the target node compares the sequence number of the data to be processed with the sequence number in the response information, the method further includes:
if the comparison results are inconsistent, sending a reset packet to the transmission control protocol stack to cancel the connection with the client.
In a second aspect, the present application discloses a node balance scheduling device, applied to a server cluster, including:
a request acquisition module, configured to obtain a connection request sent by a client;
a request processing module, configured to send the connection request to each server node, so that each server node processes the connection request according to the order of requests in its own request queue and generates corresponding response information; and
a node scheduling module, configured to feed back the response information of each server node to the client, so that the client selects the server node with the shortest feedback time as the target node and sends data to be processed to the target node, so that the target node processes the data to be processed.
In a third aspect, the present application discloses an electronic device, including:
a memory for storing a computer program; and
a processor for executing the computer program to implement the steps of the node balance scheduling method disclosed above.
In a fourth aspect, the present application discloses a computer-readable storage medium for storing a computer program, wherein the computer program, when executed by a processor, implements the steps of the node balance scheduling method disclosed above.
As can be seen, the present application discloses a node balance scheduling method applied to a server cluster, including: obtaining a connection request sent by a client; sending the connection request to each server node, so that each server node processes the connection request according to the order of requests in its own request queue and generates corresponding response information; and feeding back the response information of each server node to the client, so that the client selects the server node with the shortest feedback time as the target node and sends data to be processed to the target node, so that the target node processes the data to be processed. By exploiting the imbalance exhibited by the workload of the nodes, after a connection request is received from the client each node introduces a certain time delay before feeding back its response information according to its own load: the heavier the load, the longer the delay and the longer it takes for the response to be fed back. Real-time node scheduling is therefore performed automatically in response to dynamic changes in node load, reaching an adaptive balance among the nodes. Moreover, no communication between nodes is required, so node scheduling is simple to implement and incurs less overhead.
Brief Description of the Drawings
In order to describe the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings required in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are merely embodiments of the present invention; for those of ordinary skill in the art, other drawings can be obtained from the provided drawings without creative effort.
FIG. 1 is a flowchart of a node balance scheduling method disclosed in the present application;
FIG. 2 is a schematic diagram of a TCP three-way handshake process disclosed in the present application;
FIG. 3 is a schematic diagram of server cluster node scheduling disclosed in the present application;
FIG. 4 is a flowchart of a specific node balance scheduling method disclosed in the present application;
FIG. 5 is a schematic structural diagram of a node balance scheduling device disclosed in the present application;
FIG. 6 is a structural diagram of an electronic device disclosed in the present application.
Detailed Description of the Embodiments
The technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the drawings in the embodiments of the present application. Obviously, the described embodiments are only some rather than all of the embodiments of the present invention. Based on the embodiments of the present invention, all other embodiments obtained by those of ordinary skill in the art without creative effort fall within the protection scope of the present invention.
As noted in the Background, depending on the content of connection requests and the type of service, the workload of the individual nodes is usually unbalanced; this imbalance is hard to avoid and changes continuously over time, and the distributed scheduling algorithms commonly used for TCP traffic (round-robin, weighting, hashing, least connections, minimum misses, and fastest response) each suffer from the drawbacks described above.
To this end, the present application discloses a node balance scheduling scheme that can exploit the imbalance in node workload to automatically schedule nodes according to dynamic load changes, reach an adaptive balance among the nodes, and avoid inter-node communication, thereby achieving simple, low-overhead node scheduling.
Referring to FIG. 1, an embodiment of the present invention discloses a node balance scheduling method, applied to a server cluster, including:
Step S11: obtain the connection request sent by the client.
In this embodiment, a connection request sent by the client and containing the public Internet Protocol address bound to a preset network card in the server cluster is obtained. It should be understood that, at the initial stage of establishing the connection, the second handshake signal of the TCP connection is delayed by a certain amount according to the load status of each server node, so that the node with the lightest current load is always the first to respond to the client's connection request. The working principle is as follows: at time t1, the client accesses the cluster server and sends a SYN request to the public IP address to prepare to establish a connection. As shown in FIG. 2, when the client and the server prepare to communicate over TCP, both are initially in the CLOSED state. When the server actively listens on a port, the client generates a random initial sequence number, places it in the corresponding field of the TCP header, and sets the SYN flag to 1, marking the segment as a SYN segment. The SYN segment is then sent to the server to initiate the connection. Note that at time t1 the SYN segment sent by the client contains no application-layer data; it is only a connection request.
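A minimal client-side sketch under the assumptions above: the cluster's public IP address and port are placeholders, and the operating system kernel performs the actual handshake.

```python
import socket

# Placeholder values for illustration only; the patent merely requires that
# every node's upper network card is bound to one shared public address.
CLUSTER_PUBLIC_IP = "203.0.113.10"
SERVICE_PORT = 9000

def connect_to_cluster(timeout_s: float = 3.0) -> socket.socket:
    """Open a TCP connection to the cluster's single public address.

    The kernel performs the three-way handshake of FIG. 2: it picks a random
    initial sequence number, sends the SYN at time t1, and completes the
    connection with whichever node's SYN-ACK arrives first.
    """
    return socket.create_connection((CLUSTER_PUBLIC_IP, SERVICE_PORT), timeout=timeout_s)
```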
Step S12: send the connection request to each server node, so that each server node processes the connection request according to the order of requests in its own request queue and generates corresponding response information.
In this embodiment, after the SYN packet reaches the front-side bus, the connection request is sent through the front-side bus to the homogeneous network card of each server node. It should be understood that the front-side bus broadcasts the packet to the homogeneous network cards of the nodes; that is, a single SYN packet propagates over the front-side bus, and because the front-side bus connects all server nodes, the same SYN packet can be delivered to every server node, realizing one-to-many data distribution. After the connection request has been delivered to the server nodes, each server node delays the request by an amount determined by its own load as reported by its load-collection module; the heavier a node's load, the longer its delay. During normal operation, factors such as each node's own CPU capability and the business data it is currently processing, as well as external factors, cause the CPU usage and remaining capacity of the nodes to differ at any given moment. Therefore, when a new business request arrives, it is first placed into the node's own request queue, so that each server node processes the business requests in its queue in order and, for the connection request distributed by the front-side bus, processes it and generates the corresponding response information.
In this embodiment, the processing of the connection request is delayed based on the load of each server node and the request order, and the corresponding response information is generated. It should be understood that, as shown in FIG. 2, when a server node receives the SYN packet from the client, it also randomly initializes its own sequence number and fills it into the sequence number field of the TCP header; it then fills the acknowledgment information into the acknowledgment field of the TCP header and sets the SYN and ACK flags to 1. Finally, the server node sends this response segment to the client, i.e., the server node sends the second handshake signal containing the SYN and ACK. Note that this response segment likewise contains no application-layer data.
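The sketch below illustrates the load-proportional delay of the second handshake described above; the mapping from load to delay and the callback name are assumptions, since the patent only requires that a heavier load produce a longer delay.

```python
import time

# Tuning constant assumed for illustration; the patent does not fix how load
# maps to delay, only that a heavier load means a longer delay.
DELAY_PER_LOAD_UNIT_S = 0.001

def answer_after_load_delay(current_load: float, send_syn_ack) -> None:
    """Hold back this node's second handshake in proportion to its load.

    `current_load` stands for whatever the load-collection module reports
    (e.g. CPU usage or request-queue depth). A busier node sleeps longer, so
    its SYN-ACK reaches the client later and it loses the race to be chosen.
    """
    time.sleep(current_load * DELAY_PER_LOAD_UNIT_S)
    send_syn_ack()
```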
Step S13: feed back the response information of each server node to the client, so that the client selects the server node with the shortest feedback time as the target node and sends the data to be processed to the target node, so that the target node processes the data to be processed.
In this embodiment, the response information of each server node is fed back to the client, so that the client selects the server node with the shortest feedback time as the target node. It should be understood that at time t2 the response information of the server node with the smaller load reaches the client, and the server node whose response arrives at the client at time t2 is taken as the target node; the target node for executing the current task is thereby determined from among all server nodes. The client then sends the third handshake signal to the public IP address; this signal again reaches the target node via the front-side bus, and the data to be processed carried in this handshake signal is sent to the target node so that the target node can process it.
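A small simulation of the resulting race, under the assumption that each node's reply is delayed in proportion to its load; the node names and load values are made up for illustration.

```python
def pick_target(node_loads: dict, delay_per_unit: float = 0.001) -> str:
    """Simulate which node's SYN-ACK reaches the client first.

    Each node answers after a delay proportional to its load (step S12), so
    the node with the smallest load wins and becomes the target node.
    """
    arrival_time = {node: load * delay_per_unit for node, load in node_loads.items()}
    return min(arrival_time, key=arrival_time.get)

# Example with made-up loads: node "B" is the least loaded, so it is chosen.
print(pick_target({"A": 40.0, "B": 5.0, "C": 75.0}))  # -> "B"
```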
In this embodiment, as shown in FIG. 3, the front-side bus serves as the single entry point through which the server cluster receives and sends data packets. By virtue of the shared-medium characteristic of the hub, all packets addressed to the server cluster can be received by the homogeneous network cards of the internal server nodes. Every server node in the cluster is equipped with two network cards: the upper network card is bound to an external public IP address to realize a single-IP image of the server cluster, and the lower network card is assigned an internal IP address and is responsible for interacting with the server cluster management console and sending packets to the external gateway. The console node is responsible for managing and monitoring the working status of each service node. It should be understood that when the client sends a connection request, the request is sent to the front-side bus on the public IP address, so that the front-side bus receives all packets and distributes them, according to the internal IP addresses, to all the corresponding nodes in the server cluster. Each server node then feeds back the corresponding response information for the connection request, and the response information is returned to the client through the front-side bus. Note that there is also a console node in the server cluster, which monitors and manages the working status of each server node so that abnormal conditions can be detected in time; it generates corresponding abnormality logs and reports them promptly to the upper management layer, so that operations personnel can handle abnormal service nodes according to the logs.
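The sketch below records the dual-NIC layout and the console node's monitoring role described above as plain data structures; all field and node names are assumptions for illustration.

```python
from dataclasses import dataclass
import logging

@dataclass
class ServerNode:
    """Network layout implied by FIG. 3 (field names are assumptions)."""
    name: str
    public_ip: str    # shared address bound to the upper network card
    internal_ip: str  # per-node address on the lower network card
    healthy: bool = True

def console_scan(nodes):
    """Console-node sketch: log abnormal nodes so operations staff can handle them."""
    for node in nodes:
        if not node.healthy:
            logging.warning("abnormal service node %s (%s)", node.name, node.internal_ip)

cluster = [
    ServerNode("node-a", "203.0.113.10", "10.0.0.11"),
    ServerNode("node-b", "203.0.113.10", "10.0.0.12", healthy=False),
]
console_scan(cluster)
```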
As can be seen, the present application discloses a node balance scheduling method applied to a server cluster, including: obtaining a connection request sent by a client; sending the connection request to each server node, so that each server node processes the connection request according to the order of requests in its own request queue and generates corresponding response information; and feeding back the response information of each server node to the client, so that the client selects the server node with the shortest feedback time as the target node and sends data to be processed to the target node, so that the target node processes the data to be processed. By exploiting the imbalance exhibited by the workload of the nodes, after a connection request is received from the client each node introduces a certain time delay before feeding back its response information according to its own load: the heavier the load, the longer the delay and the longer it takes for the response to be fed back. Real-time node scheduling is therefore performed automatically in response to dynamic changes in node load, reaching an adaptive balance among the nodes. Moreover, no communication between nodes is required, so node scheduling is simple to implement and incurs less overhead.
Referring to FIG. 4, an embodiment of the present invention discloses a specific node balance scheduling method. Compared with the previous embodiment, this embodiment further describes and optimizes the technical solution. Specifically:
Step S21: obtain the connection request sent by the client.
Step S22: send the connection request to each server node, so that each server node processes the connection request according to the order of requests in its own request queue and generates corresponding response information.
Step S23: feed back the response information of each server node to the client, so that the client selects the server node with the shortest feedback time as the target node.
For more detailed processing procedures of steps S21, S22, and S23, reference may be made to the previously disclosed embodiments, which are not repeated here.
Step S24: obtain the notification message fed back by the client after the target node has been selected, and delete the response information fed back by the server nodes other than the target node.
In this embodiment, when the response information of the less-loaded node reaches the client at time t2, the client sends the third handshake signal to the public IP address; the response information of the more heavily loaded node does not reach the client until time t3. Since t3 > t2 and the client has already received the response from server node A at time t2, the client's TCP protocol stack automatically discards the response from server node B arriving at time t3. In this way, the present embodiment can, based solely on each server node's own load and the time it takes to answer the connection request, select the server node currently best suited to handle the new task, reaching an adaptive balance among the nodes and avoiding any additional communication between them; the implementation is simple and convenient.
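A minimal sketch of the cleanup performed in step S24, under the assumption that pending response information is kept in a dictionary keyed by node name; the data layout is illustrative only.

```python
def on_target_notification(pending_responses: dict, target_node: str) -> None:
    """Once the client reports which node it chose, drop the response
    information kept for every other node so no stale half-open handshake
    state is left behind."""
    for node in list(pending_responses):
        if node != target_node:
            del pending_responses[node]

# Example with placeholder response records.
pending = {"A": {"server_isn": 1111}, "B": {"server_isn": 2222}}
on_target_notification(pending, target_node="A")
print(pending)  # only node A's response information remains
```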
Step S25: send the data to be processed to the target node, so that the target node processes the data to be processed.
In this embodiment, after the response information of each server node has been fed back to the client so that the client selects the server node with the shortest feedback time as the target node and sends the data to be processed to the target node, the method further includes: comparing, by the target node, the sequence number of the data to be processed with the sequence number in the response information, and processing the data to be processed if the comparison results are consistent. If the comparison results are inconsistent, a reset packet is sent to the transmission control protocol stack to cancel the connection with the client. It should be understood that when the third handshake signal sent by the client reaches each server node via the front-side bus, the packet-filtering module checks whether the ACK sequence number of the signal matches the node's own initial sequence number; if they match, the packet is allowed through, otherwise an RST packet is sent to the upper TCP protocol stack to cancel the connection.
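The check described above can be sketched as follows; the accept and send_rst callbacks stand in for the node's interactions with its TCP stack and are assumptions for illustration.

```python
def filter_third_handshake(ack_number: int, my_server_isn: int, accept, send_rst) -> None:
    """Packet-filter sketch: a node accepts the client's final ACK only if it
    acknowledges this node's own initial sequence number; otherwise another
    node was chosen and the half-open connection is torn down with an RST."""
    if ack_number == my_server_isn + 1:
        accept()     # this node won the race; the connection becomes ESTABLISHED
    else:
        send_rst()   # hand an RST to the TCP stack to cancel the connection
```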
As can be seen, by using the length of time it takes for the response information generated by each server node for the client's connection request to reach the client, this embodiment selects the server node with the shortest response time as the target node for processing the data pending at the current moment. In addition, the sequence numbers must be compared for consistency before the pending data is processed, which avoids erroneous processing and conveniently and quickly realizes distributed scheduling of the real-time load of the nodes.
Referring to FIG. 5, an embodiment of the present invention further discloses a node balance scheduling device, applied to a server cluster, including:
a request acquisition module 11, configured to obtain a connection request sent by a client;
a request processing module 12, configured to send the connection request to each server node, so that each server node processes the connection request according to the order of requests in its own request queue and generates corresponding response information; and
a node scheduling module 13, configured to feed back the response information of each server node to the client, so that the client selects the server node with the shortest feedback time as the target node and sends data to be processed to the target node, so that the target node processes the data to be processed.
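A rough sketch of how the three modules might fit together; the bus, node, and client interfaces (receive, enqueue, send) are assumed for illustration and are not defined by the patent.

```python
class RequestAcquisitionModule:
    """Module 11: obtain the connection request addressed to the public IP."""
    def acquire(self, front_side_bus):
        return front_side_bus.receive()                    # assumed bus interface

class RequestProcessingModule:
    """Module 12: broadcast the request so each node queues it and answers
    after a load-proportional delay."""
    def dispatch(self, request, nodes):
        return [node.enqueue(request) for node in nodes]   # assumed node interface

class NodeSchedulingModule:
    """Module 13: relay every node's response; the fastest responder becomes
    the target node and receives the data to be processed."""
    def relay(self, responses, client):
        for response in responses:
            client.send(response)                          # assumed client interface
```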
In the request processing module 12, the front-side bus serves as the single entry point through which the server cluster receives and sends data packets, and the cluster is organized as described above with reference to FIG. 3: every node's upper network card is bound to the external public IP address to realize a single-IP image of the cluster, while its lower network card carries an internal IP address for interacting with the management console and sending packets to the external gateway. The console node monitors and manages the working status of each server node, detects abnormal conditions in time, generates corresponding abnormality logs, and reports them to the upper management layer so that operations personnel can handle abnormal service nodes according to the logs.
As can be seen, the present application discloses a node balance scheduling method applied to a server cluster, including: obtaining a connection request sent by a client; sending the connection request to each server node, so that each server node processes the connection request according to the order of requests in its own request queue and generates corresponding response information; and feeding back the response information of each server node to the client, so that the client selects the server node with the shortest feedback time as the target node and sends data to be processed to the target node, so that the target node processes the data to be processed. By exploiting the imbalance exhibited by the workload of the nodes, after a connection request is received from the client each node introduces a certain time delay before feeding back its response information according to its own load: the heavier the load, the longer the delay and the longer it takes for the response to be fed back. Real-time node scheduling is therefore performed automatically in response to dynamic changes in node load, reaching an adaptive balance among the nodes. Moreover, no communication between nodes is required, so node scheduling is simple to implement and incurs less overhead.
In some specific embodiments, the request acquisition module 11 may specifically include:
a first request acquisition unit, configured to obtain a connection request sent by the client that contains the public Internet Protocol address bound to a preset network card in the server cluster.
In some specific embodiments, the request processing module 12 may specifically include:
a request sending unit, configured to send the connection request to the homogeneous network card of each server node through the front-side bus.
In some specific embodiments, the request processing module 12 may specifically include:
a response information acquisition unit, configured to delay the processing of the connection request based on the load of each server node and the request order, and to generate corresponding response information.
In some specific embodiments, the node scheduling module 13 may specifically include:
an information deletion unit, configured to obtain the notification message fed back by the client after the target node has been selected, and to delete the response information fed back by the server nodes other than the target node.
In some specific embodiments, the node scheduling module 13 may specifically include:
a sequence number comparison sub-module, configured to compare, through the target node, the sequence number of the data to be processed with the sequence number in the response information, and to process the data to be processed if the comparison results are consistent.
In some specific embodiments, the sequence number comparison sub-module may specifically include:
a connection cancellation unit, configured to send a reset packet to the transmission control protocol stack to cancel the connection with the client if the comparison results are inconsistent.
Further, an embodiment of the present application also discloses an electronic device. FIG. 6 is a structural diagram of an electronic device 20 according to an exemplary embodiment; the content of the figure should not be regarded as limiting the scope of application of the present application in any way.
FIG. 6 is a schematic structural diagram of an electronic device 20 provided by an embodiment of the present application. The electronic device 20 may specifically include: at least one processor 21, at least one memory 22, a power supply 23, a communication interface 24, an input/output interface 25, and a communication bus 26. The memory 22 is used to store a computer program, which is loaded and executed by the processor 21 to implement the relevant steps of the node balance scheduling method disclosed in any of the foregoing embodiments. In addition, the electronic device 20 in this embodiment may specifically be a computer.
In this embodiment, the power supply 23 is used to provide operating voltage for the hardware devices on the electronic device 20; the communication interface 24 can create a data transmission channel between the electronic device 20 and external devices, and the communication protocol it follows may be any communication protocol applicable to the technical solution of the present application, which is not specifically limited here; the input/output interface 25 is used to obtain external input data or to output data to the outside, and its specific interface type may be selected according to specific application needs and is not specifically limited here.
The processor 21 may include one or more processing cores, such as a 4-core or 8-core processor. The processor 21 may be implemented in at least one hardware form among DSP (Digital Signal Processing), FPGA (Field-Programmable Gate Array), and PLA (Programmable Logic Array). The processor 21 may also include a main processor and a coprocessor: the main processor, also called the CPU (Central Processing Unit), is the processor used to process data in the awake state, while the coprocessor is a low-power processor used to process data in the standby state. In some embodiments, the processor 21 may be integrated with a GPU (Graphics Processing Unit), which is responsible for rendering and drawing the content to be displayed on the display screen. In some embodiments, the processor 21 may further include an AI (Artificial Intelligence) processor for handling computing operations related to machine learning.
In addition, the memory 22, as a carrier for resource storage, may be a read-only memory, a random access memory, a magnetic disk, an optical disc, or the like. The resources stored thereon may include an operating system 221, a computer program 222, and the like, and the storage may be transient or persistent.
The operating system 221 is used to manage and control the hardware devices and the computer program 222 on the electronic device 20, so that the processor 21 can perform operations on and process the mass data 223 in the memory 22; it may be Windows Server, Netware, Unix, Linux, or the like. In addition to a computer program capable of implementing the node balance scheduling method executed by the electronic device 20 disclosed in any of the foregoing embodiments, the computer program 222 may further include computer programs capable of completing other specific tasks. The data 223 may include data received by the electronic device from external devices as well as data collected through its own input/output interface 25.
Further, the present application also discloses a computer-readable storage medium for storing a computer program, wherein the computer program, when executed by a processor, implements the node balance scheduling method disclosed above. For specific steps of the method, reference may be made to the corresponding content disclosed in the foregoing embodiments, which is not repeated here.
The embodiments in this specification are described in a progressive manner; each embodiment focuses on its differences from the other embodiments, and the same or similar parts of the embodiments may be referred to one another. Since the device disclosed in the embodiments corresponds to the method disclosed in the embodiments, its description is relatively brief, and relevant details can be found in the description of the method.
Those skilled in the art may further appreciate that the units and algorithm steps of the examples described in conjunction with the embodiments disclosed herein can be implemented in electronic hardware, computer software, or a combination of the two. To clearly illustrate the interchangeability of hardware and software, the composition and steps of each example have been described above in general terms of function. Whether these functions are implemented in hardware or software depends on the specific application and the design constraints of the technical solution. Skilled artisans may use different methods to implement the described functions for each particular application, but such implementations should not be considered beyond the scope of the present application. The steps of the methods or algorithms described in conjunction with the embodiments disclosed herein may be implemented directly in hardware, in a software module executed by a processor, or in a combination of the two. The software module may reside in random access memory (RAM), internal memory, read-only memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the technical field.
Finally, it should also be noted that, in this document, relational terms such as first and second are used only to distinguish one entity or operation from another, and do not necessarily require or imply any such actual relationship or order between these entities or operations. Moreover, the terms "comprising", "including", or any other variants thereof are intended to cover a non-exclusive inclusion, so that a process, method, article, or device that includes a list of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, article, or device. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of additional identical elements in the process, method, article, or device that includes the element.
The node balance scheduling method, device, equipment, and storage medium provided by the present invention have been described in detail above. Specific examples have been used herein to explain the principles and implementations of the present invention; the description of the above embodiments is only intended to help understand the method of the present invention and its core idea. Meanwhile, for those of ordinary skill in the art, there will be changes in the specific implementation and scope of application according to the idea of the present invention. In summary, the content of this specification should not be construed as limiting the present invention.
Claims (10)
Priority Applications (1)

| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202210612292.9A | 2022-05-31 | 2022-05-31 | Node balance scheduling method, device, equipment and storage medium |
Publications (1)

| Publication Number | Publication Date |
|---|---|
| CN115002041A | 2022-09-02 |
Family
ID=83031357
Family Applications (1)

| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202210612292.9A | Node balance scheduling method, device, equipment and storage medium | 2022-05-31 | 2022-05-31 |

Country Status (1)

| Country | Link |
|---|---|
| CN (1) | CN115002041A (en) |
Legal Events

| Date | Code | Title |
|---|---|---|
| | PB01 | Publication |
| | SE01 | Entry into force of request for substantive examination |