CN100435530C - A Realization Method of Two-way Load Balancing Mechanism in Multi-machine Server System - Google Patents

A Realization Method of Two-way Load Balancing Mechanism in Multi-machine Server System

Info

Publication number
CN100435530C
CN100435530C CNB2006100427623A CN200610042762A
Authority
CN
China
Prior art keywords
node
load
end server
load equalizer
server node
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CNB2006100427623A
Other languages
Chinese (zh)
Other versions
CN1859313A (en)
Inventor
伍卫国
董小社
付重钦
钱德沛
王恩东
胡雷钧
王守昊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Inspur Beijing Electronic Information Industry Co Ltd
Xian Jiaotong University
Original Assignee
Inspur Beijing Electronic Information Industry Co Ltd
Xian Jiaotong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Inspur Beijing Electronic Information Industry Co Ltd, Xian Jiaotong University filed Critical Inspur Beijing Electronic Information Industry Co Ltd
Priority to CNB2006100427623A priority Critical patent/CN100435530C/en
Publication of CN1859313A publication Critical patent/CN1859313A/en
Application granted granted Critical
Publication of CN100435530C publication Critical patent/CN100435530C/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Data Exchanges In Wide-Area Networks (AREA)
  • Computer And Data Communications (AREA)

Abstract

The present invention relates to a method for implementing a bidirectional load balancing mechanism in a multi-machine server system. A load balancing system composed of one or more load balancer nodes is connected to the external network, and both request and return data packets pass through the load balancing system, so the server nodes inside the system are shielded from the outside and the server system has high security. The load balancer nodes balance the load of requests arriving from clients, and the request packets are distributed by rewriting their destination MAC addresses, which improves performance. The returned data packets are evenly distributed over the load balancer nodes as they pass back through the load balancing system, so the whole server system performs bidirectional load balancing. When a load balancer node fails, both request packets and return packets can be shifted to other load balancer nodes, providing high availability.

Description

A Realization Method of Two-way Load Balancing Mechanism in Multi-machine Server System

Technical Field

The present invention relates to the field of computer technology, and specifically provides a bidirectional load balancing mechanism for use in a multi-machine server system.

Technical Background

With the rapid development of the Internet, the fast growth of all kinds of applications has greatly increased the traffic reaching network servers, which has led to the emergence of multi-machine server systems (such as cluster systems) to meet the ever-growing demand. Load balancing is a key technology in a multi-machine server system; its main purpose is to balance the load so that the whole system achieves its best performance. A load balancing system built from multiple load balancers is one of the most widely used solutions at present, but load balancing in the usual sense refers only to balancing client request packets (also called upstream data). For the response packets returned by the server nodes (also called downstream data), some systems return them directly from the server nodes to the clients; no load balancing is needed in that case, but the back-end server system (generally composed of server nodes interconnected through a network) is exposed to the external network, so system security is poor. Other systems use a Network Address Translation (NAT) mechanism; although both request and return packets pass through the load balancing system, which guarantees security, this approach has two drawbacks: first, address translation imposes a relatively large overhead on the load balancing system and hurts its performance; second, the returned packets are not load balanced.

Summary of the Invention

The purpose of the present invention is to overcome the above-mentioned deficiencies of the prior art and to provide a method for implementing a bidirectional load balancing mechanism in a multi-machine server system, which offers high availability and improves the performance and security of the server system.

The technical scheme of the present invention is carried out as follows:

1) In a multi-machine server system composed of many computers, one or more load balancer nodes form a load balancing system. Each balancer node has two Ethernet ports: one is connected to the external network and is responsible for receiving request packets from clients; the other is connected to the internal network and is responsible for communicating with the back-end server system;

2) When a client request packet arrives, the load balancer node in the load balancing system selects the destination of the request packet according to the load and liveness of the back-end server nodes, rewrites the destination MAC address of the packet to the MAC address of the selected back-end server node, and then forwards the packet to that back-end server node;

3) The load balancer node and the back-end server node have at least one network interface in the same network segment, so that a packet can reach the target back-end server node simply by rewriting its destination MAC address;

4) The internal IP addresses of all load balancer nodes and the IP addresses of the back-end server nodes are written into a configuration file and kept on the console of the load balancing system;

5) On the console of the load balancing system, the administrator numbers the load balancer nodes and the back-end server nodes with consecutive integers starting from 0 and takes these numbers modulo the total number of load balancer nodes. If a back-end server node is numbered j and j modulo the number of load balancer nodes is i, the internal IP address of the i-th load balancer node is used as the default gateway address of that back-end server node. In this way the internal IP addresses of the load balancer nodes are assigned evenly as the default gateway addresses of the back-end server nodes (an illustrative sketch of this assignment follows these steps);

6) After a back-end server node has processed a request packet from a client, it forwards the return data to its own default gateway, that is, to the corresponding load balancer node;

7) When a load balancer node is implemented as a host running the Linux operating system, the Linux kernel is modified so that it accepts packets arriving from outside whose source IP address is the same as its own IP address, and packet forwarding is enabled in the kernel so that the return packets sent by the back-end server nodes are forwarded directly to the external network;

8) When a load balancer node is added or removed, a resident program on the console of the load balancing system automatically updates the stored table of internal IP addresses of the load balancer nodes, recomputes the modulo mapping, re-partitions the back-end server nodes among the load balancer nodes, and reassigns the internal IP addresses of the load balancer nodes that are working normally to the back-end server nodes as their default gateway addresses, so as to guarantee the high availability and load balance of the balancer system;

9) When a back-end server node is added or removed, the administrator renumbers the back-end server nodes on the console with consecutive integers starting from 0, takes the new numbers modulo the total number of load balancer nodes, and reconfigures the default gateways of the back-end server nodes.
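
The modulo-based gateway assignment described in steps 5), 8) and 9) can be illustrated with a short sketch. The Python code below is illustrative only and is not part of the claimed method; the function and variable names (assign_default_gateways, balancer_internal_ips) are assumptions, and applying the computed gateways to the servers is assumed to happen elsewhere.

```python
def assign_default_gateways(balancer_internal_ips, num_servers):
    """Map each back-end server node (numbered 0..num_servers-1) to the
    internal IP of one load balancer node by taking the server number
    modulo the number of balancer nodes, as described in step 5)."""
    n = len(balancer_internal_ips)
    if n == 0:
        raise ValueError("at least one load balancer node is required")
    # server j gets balancer i = j mod n as its default gateway
    return {j: balancer_internal_ips[j % n] for j in range(num_servers)}

# Example: 3 balancer nodes, 7 back-end server nodes
gateways = assign_default_gateways(["10.0.0.1", "10.0.0.2", "10.0.0.3"], 7)
# -> {0: '10.0.0.1', 1: '10.0.0.2', 2: '10.0.0.3', 3: '10.0.0.1', ...}
```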

By adopting the above method, the present invention achieves the following technical effects:

1. Bidirectional load balancing

The system can balance the load of request packets and, because the return packets also travel back through the load balancing system, it can balance the load of the return packets as well, which further improves the load balancing effect of the server system.

2. High performance

During forward load balancing, packets are distributed to the back-end server nodes by rewriting their destination MAC addresses; during reverse load balancing, the gateway addresses of the server nodes are used, which in essence also works by rewriting the destination MAC address of the packet. Compared with the NAT mechanism, no network address translation is needed, so the overhead of the load balancing system is small and the performance of the server system is improved.

3. High availability

When a load balancer node fails, its work can be migrated by dynamically changing the gateway addresses of the affected back-end server nodes to the internal IP addresses of load balancer nodes that are still working, which provides high availability.

4. Security

Both request and return packets pass through the load balancing system, and the server nodes inside the multi-machine server system are shielded from the external network. Compared with the Direct Routing (DR) mechanism, this protects the security of the whole multi-machine server system more effectively.

5. Scalability

Load balancer nodes can be added to or removed from the server system dynamically as needed, so that the best price/performance ratio can be achieved.

Description of the Drawings

Fig. 1 is a schematic diagram of the working principle of the present invention during forward load balancing.

Fig. 2 is a schematic diagram of the working principle of the present invention during reverse load balancing.

Fig. 3 shows how the addresses and port numbers in a packet are rewritten while the present invention performs load balancing.

Fig. 4 is a topology diagram of the gateway partitioning applied to all server nodes together.

Fig. 5 is a topology diagram of the gateway partitioning applied to each service pool separately.

The drawings show specific embodiments of the present invention.

The content of the present invention is described in further detail below with reference to the drawings.

Detailed Description of the Embodiments

Referring to Fig. 1, the load balancing system is composed of multiple balancer nodes, each of which has two Ethernet ports: one is connected to the external network and is responsible for receiving client request packets; the other is connected to the internal network and is responsible for communicating with the back-end server system. The dashed ellipse above the dashed line in the figure is where forward load balancing takes place. As Fig. 1 shows, during forward load balancing, balancer node i (i in 0..n) distributes packets by rewriting their destination MAC addresses. When a request packet from a client passes through the load balancing system, load balancer node i (i in 0..n) decides, according to a preset balancing algorithm, which server node j (j in 0..m) should handle the request, rewrites the destination MAC address of the packet to the MAC address of the selected server node j (j in 0..m), and forwards the packet.
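
As a purely illustrative sketch of this forward path, and not the patented implementation, the rewrite-and-forward step could be prototyped in user space with the Scapy packet library; the interface names, the server MAC table and the choose_server() policy below are assumptions.

```python
from scapy.all import Ether, IP, sniff, sendp  # pip install scapy

# Assumed example data: MAC addresses of the back-end server nodes
SERVER_MACS = ["52:54:00:00:00:01", "52:54:00:00:00:02"]
EXTERNAL_IFACE = "eth0"   # port facing the external network (assumption)
INTERNAL_IFACE = "eth1"   # port facing the back-end network (assumption)

def choose_server(packet):
    """Placeholder balancing policy: pick a server by hashing the client IP.
    A real balancer would use load and liveness information instead."""
    return SERVER_MACS[hash(packet[IP].src) % len(SERVER_MACS)]

def forward_request(packet):
    """Rewrite the destination MAC and re-emit the frame on the internal port."""
    if IP not in packet:
        return
    packet[Ether].dst = choose_server(packet)
    sendp(packet, iface=INTERNAL_IFACE, verbose=False)

# Capture request frames arriving on the external port and redistribute them.
sniff(iface=EXTERNAL_IFACE, prn=forward_request, store=False)
```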

Referring to Fig. 2, the dashed ellipse below the dashed line in the figure is where reverse load balancing takes place. Reverse load balancing is achieved by setting the gateway address of server node j (j in 0..m) to the internal IP address of balancer node i (i in 0..n). When a return packet leaves server node j (j in 0..m), it is forwarded to the balancer node i (i in 0..n) corresponding to that server node's gateway address, and this balancer node then forwards it to the client.

Referring to Fig. 3, Cip is the client IP; Vip is the single IP that the load balancing system presents to the external network, generally called the single or virtual IP; Cport is the client's (network) port number and Vport is the destination port number; Vmac is the virtual MAC address that the load balancing system presents externally; Rmac is the MAC address of the selected back-end server node; and Gmac is the MAC address of the back-end server node's gateway, that is, the MAC address of the internal network card of the corresponding balancer node. As Fig. 3 shows, when a balancer node in the load balancing system receives a client request packet and performs forward load balancing, the destination MAC address of the packet is rewritten to the MAC address of the selected server node; when a return packet from a server node undergoes reverse load balancing, its destination MAC address is rewritten to the server's gateway address, which is the MAC address of the corresponding balancer node.

Referring to Fig. 4, all server nodes are taken together and their numbers are reduced modulo the number of balancer nodes, so that the server-node gateways are divided evenly. The figure assumes n balancer nodes, numbered 0 to n-1, and (k+1)n server nodes. The result of the modulo partitioning is shown in the figure: the server nodes numbered 0, n, ..., kn are assigned to balancer node 0; the server nodes numbered 1, n+1, ..., kn+1 are assigned to balancer node 1; and the server nodes numbered n-1, 2n-1, ..., kn+n-1 are assigned to balancer node n-1. The purpose of the partitioning is to give each balancer a roughly equal share of the work.

Referring to Fig. 5, the back-end server system may be composed of many service pools that provide different services. In that case the service pools are partitioned separately, that is, each pool is divided by the method shown in Fig. 4. To avoid an unbalanced load on the balancers when a pool contains only a few server nodes, a two-level partitioning strategy is used: first over the service pools, then over the server nodes inside each pool.
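
A minimal sketch of this two-level partitioning is given below, assuming the pools are described by a simple dictionary; rotating the starting balancer from pool to pool is one possible way to keep small pools from all mapping to the same balancer, and is an assumption rather than something prescribed by the text.

```python
def assign_gateways_per_pool(balancer_ips, pools):
    """Two-level partitioning sketch: apply the modulo assignment of Fig. 4
    inside each service pool separately, offsetting the starting balancer
    from pool to pool. `pools` maps a pool name to its list of server node
    identifiers (names chosen here for illustration only)."""
    n = len(balancer_ips)
    assignment = {}
    for pool_index, (pool_name, servers) in enumerate(sorted(pools.items())):
        for j, server in enumerate(servers):
            # offset by pool_index so small pools start on different balancers
            assignment[server] = balancer_ips[(j + pool_index) % n]
    return assignment

pools = {"web": ["w0", "w1", "w2"], "ftp": ["f0", "f1"], "mail": ["m0"]}
print(assign_gateways_per_pool(["10.0.0.1", "10.0.0.2"], pools))
```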

The bidirectional load balancing mechanism provided by the present invention includes forward load balancing and reverse load balancing. Forward load balancing means balancing the upstream request packets (packets travelling from a client to the server side); the load balancing system distributes the requests evenly over the server nodes according to a preset policy. Reverse load balancing means distributing the packets returned by the server nodes after processing evenly over the load balancer nodes, which then return them to the clients. This method is intended for cases where both request and return packets must pass through the load balancing system.

In a load balancing system, the failure of a load balancer node affects the whole multi-machine system. The method provided by the present invention, which dynamically rewrites the gateway addresses of the back-end server nodes so that they point to any other load balancer node that is working normally, solves this problem well: while guaranteeing high availability, it also spreads the load of the failed node evenly over the other load balancer nodes, avoiding the traditional situation in which a backup node takes over the entire job of the failed node and therefore absorbs all of its load.

The packet transmission scheme provided by the present invention is as follows: during forward load balancing, the load balancer node rewrites the destination MAC address of the packet; during reverse load balancing, the gateway setting of the server node is used. Rewriting the destination MAC address of a packet costs very little; and since the load balancer node acts as the gateway of the back-end server nodes, it only makes a simple decision before forwarding each packet that passes through it, so that overhead is likewise very small. As a result, the whole multi-machine server system can offer high access performance.

The bidirectional load balancing mechanism of the present invention supports scalability. Scalability means satisfying growing demands on performance and functionality by adding resources, or lowering cost by removing resources. The total service capacity of the system should increase in proportion to the resources added; ideally the growth is linear, and the increase in cost should be less than a linear factor of N (where N is the number of replicated resources) or of NlogN. In the load balancing mechanism of the present invention, the number of load balancer nodes can be increased or decreased dynamically as demand changes, and system performance is approximately linear in the number of load balancer nodes.

The present invention is realized as follows. For a request packet coming from a client, during forward load balancing the load balancer node rewrites the destination MAC address of the packet and forwards it directly to a back-end server node, without network address translation. So that the packets returned to the client can be forwarded out through a load balancer node, the console of the multi-machine system keeps an information list of the load balancer nodes and back-end server nodes; following the load balancing algorithm set by the administrator, a partitioning program resident on the console divides the server nodes among the gateways so that the server nodes are mapped to the load balancer nodes as evenly as possible, and then sets the default gateways of the back-end server nodes according to the computed mapping. In this way, a packet returned by a back-end server node is forwarded to its default gateway, which is the corresponding load balancer node. At the same time, a management and monitoring program on the console watches the state of every load balancer node and back-end server node and dynamically adjusts the mapping between server nodes and load balancer nodes, guaranteeing high availability.
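
One way the console's partitioning program might push the computed mapping onto the servers is sketched below; reaching each server over ssh and setting its default route with the Linux command ip route replace default via <gateway> is a deployment assumption, not part of the claims.

```python
import subprocess

def apply_default_gateway(server_host, gateway_ip):
    """Set the default route of one back-end server node to the internal IP
    of its assigned load balancer node. Assumes the console can reach the
    server over ssh with root privileges (deployment assumption)."""
    cmd = ["ssh", f"root@{server_host}",
           "ip", "route", "replace", "default", "via", gateway_ip]
    subprocess.run(cmd, check=True)

def apply_mapping(mapping):
    """`mapping` is the {server_host: gateway_ip} dictionary produced by the
    partitioning program (see the earlier assignment sketch)."""
    for server_host, gateway_ip in mapping.items():
        apply_default_gateway(server_host, gateway_ip)
```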

Because forward load balancing is achieved by directly rewriting the destination MAC address of the packet, and reverse load balancing uses the load balancer node as the default gateway of the back-end server node, the load balancer nodes and the server nodes must have at least one port in the same network segment. Only then can the packets destined for the server system reach the server nodes through the load balancer nodes, and only then can the returned packets be forwarded out through the load balancer nodes.

During forward load balancing, various balancing algorithms can be used to distribute the incoming packets evenly over the back-end server nodes, for example round robin or least connections.
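
The two scheduling policies mentioned above can be sketched as follows; the class names and the way connection counts are tracked are illustrative assumptions.

```python
import itertools

class RoundRobinScheduler:
    """Cycle through the back-end server nodes in a fixed order."""
    def __init__(self, servers):
        self._cycle = itertools.cycle(servers)

    def pick(self):
        return next(self._cycle)

class LeastConnectionScheduler:
    """Pick the server node that currently has the fewest active connections.
    How the counts are kept up to date is left open here (assumption)."""
    def __init__(self, servers):
        self.connections = {s: 0 for s in servers}

    def pick(self):
        server = min(self.connections, key=self.connections.get)
        self.connections[server] += 1
        return server

    def release(self, server):
        self.connections[server] -= 1
```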

During reverse load balancing, in order to distribute the returned packets evenly over the load balancer nodes, the correspondence between back-end server nodes and load balancer nodes must be partitioned. The specific partitioning method is as follows:

Suppose there are n load balancer nodes and m back-end server nodes (m > n), the load balancer nodes are numbered 0, 1, ..., n-1, and the back-end server nodes are numbered 0, 1, ..., m-1. The partitioning algorithm is then as follows: take the number of a back-end server node modulo n; if the result is i (0 <= i <= n-1), set the internal IP address of the i-th load balancer node as the default gateway address of that server node.

When a load balancer node fails, the gateways of the back-end server nodes must be reset to preserve high availability, and after the reset load balancing must still be in effect. The re-partitioning method is as follows:

When the load balancer node numbered i fails, it is removed from the system. The total number of load balancer nodes is now n-1, so the remaining load balancer nodes are renumbered consecutively from 0, the numbers of the back-end server nodes are taken modulo n-1 again, and the default gateway addresses of the back-end server nodes are reconfigured accordingly. Efficiency can also be improved by reassigning only the server nodes that corresponded to the failed load balancer node i to the other load balancer nodes, while leaving the other server nodes untouched; that is, when load balancer node i fails, the back-end server nodes that corresponded to it are re-divided among the remaining load balancer nodes.
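
The lighter-weight failover variant, reassigning only the server nodes whose gateway was the failed balancer, can be sketched as follows; the names and data shapes are illustrative assumptions.

```python
def reassign_after_failure(mapping, failed_ip, remaining_ips):
    """Re-partition after a balancer failure using the lighter-weight variant:
    only the server nodes whose gateway was the failed balancer are spread
    round-robin over the balancers that are still alive; every other mapping
    is left untouched."""
    if not remaining_ips:
        raise ValueError("no working load balancer nodes left")
    orphaned = [s for s, gw in mapping.items() if gw == failed_ip]
    new_mapping = dict(mapping)
    for idx, server in enumerate(orphaned):
        new_mapping[server] = remaining_ips[idx % len(remaining_ips)]
    return new_mapping
```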

To achieve scalability, a re-partitioning is likewise needed when a load balancer node is added. The algorithm is as follows:

When there are already n load balancer nodes and one more is added, the new node is given the number n and the total number of load balancers becomes n+1; the numbers of the back-end server nodes are then taken modulo n+1 again and their default gateway addresses are reconfigured. Efficiency can also be improved by moving only some of the server nodes that corresponded to the original n load balancer nodes over to the newly added node, while leaving the other server nodes untouched. That is, when a new load balancer node is added and numbered n, m/(n+1) (rounded down) back-end server nodes are assigned to it: from the server nodes corresponding to each of the original n load balancer nodes, m/(n*(n+1)) (rounded down) nodes are taken, re-mapped to the newly added load balancer node, and their default gateway addresses are changed.
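
The corresponding lighter-weight scale-out step can be sketched in the same style; again the names and data shapes are assumptions.

```python
def rebalance_after_addition(mapping, old_balancer_ips, new_balancer_ip):
    """Lighter-weight scale-out variant: move roughly m/(n*(n+1)) server
    nodes (rounded down) from each existing balancer to the newly added one,
    so that the new balancer ends up with about m/(n+1) servers while the
    rest of the mapping stays untouched."""
    n = len(old_balancer_ips)
    m = len(mapping)
    per_old_balancer = m // (n * (n + 1))
    new_mapping = dict(mapping)
    for balancer_ip in old_balancer_ips:
        movable = [s for s, gw in new_mapping.items() if gw == balancer_ip]
        for server in movable[:per_old_balancer]:
            new_mapping[server] = new_balancer_ip
    return new_mapping
```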

If the load balancer node is implemented as a host running the Linux operating system, then in order to handle the return packets it must enable packet forwarding in the Linux kernel so that packets whose destination IP is not its own are forwarded.
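
As a deployment-level sketch, IP forwarding can be enabled on a Linux host through the procfs switch shown below; the additional kernel change required by step 7), accepting packets whose source IP equals the balancer's own IP, is not reproduced here.

```python
def enable_ip_forwarding():
    """Turn on IPv4 forwarding on a Linux host by writing the procfs switch
    (equivalent to `sysctl -w net.ipv4.ip_forward=1`); requires root."""
    with open("/proc/sys/net/ipv4/ip_forward", "w") as f:
        f.write("1\n")
```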

Claims (1)

1. A method for implementing a bidirectional load balancing mechanism in a multi-machine server system, characterized in that it is carried out as follows:
1) in a multi-machine server system composed of many computers, one or more load balancer nodes form a load balancing system, each balancer node having two Ethernet ports, one connected to the external network and responsible for receiving request packets from clients, the other connected to the internal network and responsible for communicating with the back-end server system;
2) when a client request packet arrives, the load balancer node in the load balancing system selects the destination of the request packet according to the load and liveness of the back-end server nodes, rewrites the destination MAC address of the packet to the MAC address of the selected back-end server node, and then delivers the packet to that back-end server node;
3) the load balancer node and the back-end server node have at least one network interface in the same network segment, to guarantee that a packet can reach the target back-end server node simply by rewriting its destination MAC address;
4) the internal IP addresses of all load balancer nodes and the IP addresses of the back-end server nodes are written into a configuration file kept on the console of the load balancing system;
5) on the console of the load balancing system, the administrator numbers the load balancer nodes and the back-end server nodes with consecutive integers starting from 0 and takes these numbers modulo the total number of load balancer nodes; if a back-end server node is numbered j and j modulo the number of load balancer nodes is i, the internal IP address of the i-th load balancer node is used as the default gateway address of that back-end server node, so that the internal IP addresses of the load balancer nodes are assigned evenly as the default gateway addresses of the back-end server nodes;
6) after a back-end server node has processed the request packet from the client, it forwards the return data to its own default gateway, namely the corresponding load balancer node;
7) when the load balancer node is implemented as a host running the Linux operating system, the load balancer node modifies the Linux kernel so that the kernel accepts packets arriving from outside whose source IP is the same as its own IP, and enables its own forwarding function in the Linux kernel so that the return packets sent by the back-end server nodes are forwarded directly to the external network;
8) when a load balancer node is added or removed, a resident program on the console of the load balancing system automatically updates the stored table of internal IP addresses of the load balancer nodes, recomputes the modulo mapping, re-partitions the back-end server nodes among the load balancer nodes, and reassigns the internal IP addresses of the normally working load balancer nodes to the back-end server nodes as their default gateway addresses, so as to guarantee the high availability and load balance of the balancer system;
9) when a back-end server node is added or removed, the administrator renumbers the back-end server nodes with consecutive integers starting from 0 on the console of the load balancing system, takes the new numbers modulo the total number of load balancer nodes, and reconfigures the default gateways of the back-end server nodes.
CNB2006100427623A 2006-04-30 2006-04-30 A Realization Method of Two-way Load Balancing Mechanism in Multi-machine Server System Expired - Fee Related CN100435530C (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CNB2006100427623A CN100435530C (en) 2006-04-30 2006-04-30 A Realization Method of Two-way Load Balancing Mechanism in Multi-machine Server System

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CNB2006100427623A CN100435530C (en) 2006-04-30 2006-04-30 A Realization Method of Two-way Load Balancing Mechanism in Multi-machine Server System

Publications (2)

Publication Number Publication Date
CN1859313A CN1859313A (en) 2006-11-08
CN100435530C true CN100435530C (en) 2008-11-19

Family

ID=37298177

Family Applications (1)

Application Number Title Priority Date Filing Date
CNB2006100427623A Expired - Fee Related CN100435530C (en) 2006-04-30 2006-04-30 A Realization Method of Two-way Load Balancing Mechanism in Multi-machine Server System

Country Status (1)

Country Link
CN (1) CN100435530C (en)

Families Citing this family (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101217420B (en) * 2007-12-27 2011-04-20 华为技术有限公司 A linkage processing method and device
CN101500005B (en) * 2008-02-03 2012-07-18 北京艾德斯科技有限公司 Method for access to equipment on server based on iSCSI protocol
CN101557388B (en) * 2008-04-11 2012-05-23 中国科学院声学研究所 NAT traversing method based on combination of UPnP and STUN technologies
CN101276289B (en) * 2008-05-09 2010-06-16 中兴通讯股份有限公司 The method of communication between user and multi-core in Linux system
CN101404619B (en) * 2008-11-17 2011-06-08 杭州华三通信技术有限公司 Method for implementing server load balancing and a three-layer switchboard
KR101433816B1 (en) 2010-06-18 2014-08-27 노키아 솔루션스 앤드 네트웍스 오와이 Server cluster
CN102497652B (en) * 2011-12-12 2014-07-30 武汉虹信通信技术有限责任公司 Load balancing method and device for large-flow data of code division multiple access (CDMA) R-P interface
CN102523302B (en) * 2011-12-26 2015-08-19 华为数字技术(成都)有限公司 The load-balancing method of cluster virtual machine, server and system
CN104580391A (en) * 2014-12-18 2015-04-29 国云科技股份有限公司 A Method for Improving Server Bandwidth Applicable to Cloud Computing
CN105554176B (en) * 2015-12-29 2019-01-18 华为技术有限公司 Send the method, apparatus and communication system of message
CN110198337B (en) * 2019-03-04 2021-10-08 腾讯科技(深圳)有限公司 Network load balancing method and device, computer readable medium and electronic equipment
CN111010342B (en) * 2019-11-21 2023-04-07 天津卓朗科技发展有限公司 Distributed load balancing implementation method and device
CN111338454B (en) * 2020-02-29 2021-08-03 苏州浪潮智能科技有限公司 A system and method for server power load balancing
CN111556177B (en) * 2020-04-22 2021-04-06 腾讯科技(深圳)有限公司 Network switching method, device, equipment and storage medium
CN114816723A (en) * 2021-01-29 2022-07-29 中移(苏州)软件技术有限公司 A load balancing system, method and computer readable storage medium
CN117240681B (en) * 2023-08-04 2025-03-07 安徽助行软件科技有限公司 Data packet processing method in LVS load balancing

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6567377B1 (en) * 1999-03-18 2003-05-20 3Com Corporation High performance load balancing of outbound internet protocol traffic over multiple network interface cards
KR20020035225A (en) * 2000-11-04 2002-05-11 남민우 Method and apparatus of server load balancing using MAC address translation
CN1403934A (en) * 2001-09-06 2003-03-19 华为技术有限公司 Load balancing method and equipment for convective medium server
CN1426211A (en) * 2001-12-06 2003-06-25 富士通株式会社 Server load sharing system
JP2004118622A (en) * 2002-09-27 2004-04-15 Jmnet Inc Load distributor, and method and program for the same

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
"Web服务器的负载均衡". 杨厚群,康耀红,魏应彬.计算机工程 增刊,第26卷. 2000
"基于机群的网络服务器系统构架研究". 范新媛,徐国治,陈研,王东民.上海大学学报(自然科学版)增刊,第8卷. 2002
"Web服务器的负载均衡". 杨厚群,康耀红,魏应彬.计算机工程 增刊,第26卷. 2000 *
"基于机群的网络服务器系统构架研究". 范新媛,徐国治,陈研,王东民.上海大学学报(自然科学版)增刊,第8卷. 2002 *

Also Published As

Publication number Publication date
CN1859313A (en) 2006-11-08

Similar Documents

Publication Publication Date Title
CN100435530C (en) A Realization Method of Two-way Load Balancing Mechanism in Multi-machine Server System
US10547544B2 (en) Network fabric overlay
US9509615B2 (en) Managing link aggregation traffic in a virtual environment
US9008102B2 (en) Redundancy of network services in restricted networks
US9253245B2 (en) Load balancer and related techniques
US10120729B2 (en) Virtual machine load balancing
US9350666B2 (en) Managing link aggregation traffic in a virtual environment
US20090094610A1 (en) Scalable Resources In A Virtualized Load Balancer
EP4320839A1 (en) Architectures for disaggregating sdn from the host
WO2014022168A1 (en) System and method for virtual ethernet interface binding
US9686178B2 (en) Configuring link aggregation groups to perform load balancing in a virtual environment
US20150271075A1 (en) Switch-based Load Balancer
CN106301859A (en) A kind of manage the method for network interface card, Apparatus and system
US11516125B2 (en) Handling packets travelling towards logical service routers (SRs) for active-active stateful service insertion
CN104079668B (en) A kind of DNS load balancing adjusting method and system
EP4320516A1 (en) Scaling host policy via distribution
US11647083B2 (en) Cluster-aware multipath transmission control protocol (MPTCP) session load balancing
Chen et al. A scalable multi-datacenter layer-2 network architecture
CN112073503A (en) High-performance load balancing method based on flow control mechanism
CN117561705A (en) Routing policies for graphics processing units
WO2022216432A1 (en) Architectures for disaggregating sdn from the host
US20220217202A1 (en) Capability-aware service request distribution to load balancers
CN114039894B (en) Network performance optimization method, system, device and medium based on vector packet
CN106375427A (en) A distributed SAN storage system link redundancy optimization method
CN117597894A (en) Routing policies for graphics processing units

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
C17 Cessation of patent right
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20081119

Termination date: 20110430