Method for distributing weights by a cloud platform for multi-active load balancing
Technical Field
The invention relates to the field of cloud computing, and in particular to a method for a cloud platform to distribute weights for multi-active load balancing.
Background
Cloud computing is one of the most popular topics in the field of IT infrastructure in recent years. It virtualizes and abstracts resources such as computing, networking and storage, providing users with a very convenient resource-consumption model and flexible resource-expansion capability. Meanwhile, the rapid development of computer communication networks and related technologies has made network congestion and overload commonplace. To solve the network load problem, load balancing technology is required in order to provide more flexible, convenient, fast and efficient cloud computing services. Load balancing is built on the existing network structure and provides an inexpensive, efficient and transparent way to extend the bandwidth of network devices and servers, increase throughput, enhance network data-processing capability, and improve the flexibility and availability of the network.
There are many different load balancing techniques to meet different application requirements, such as software/hardware load balancing, local/global load balancing, higher network layer load balancing, and link aggregation techniques.
Existing multi-active load balancing depends on equal-cost routes (implemented by means of multipath routing in linux). The hash strategy operates at layer 3 (source-IP hash) or layer 4 (hash of source IP, destination IP, source port, destination port and transport protocol); every next hop of the equal-cost route is treated as fully equal, so no flexible adjustment can be made according to the specific situation of each load-balancing node.
Traditional multi-active load balancing implemented with equal-cost routes can be scaled out by increasing the number of virtual machines, but resource consumption multiplies accordingly. Moreover, because the load-balancing nodes and the back-end servers are randomly distributed across physical machines, network communication between virtual machines on different physical machines increases, raising the load and communication pressure on the physical network.
Disclosure of Invention
In order to solve the above technical problems, the invention provides a method for a cloud platform to distribute weights for multi-active load balancing. By flexibly configuring the weights of the multipath route, access traffic and connection counts are distributed more reasonably, and the queries-per-second (qps) performance of multi-active load balancing based on multipath routing is improved.
The technical scheme of the invention is as follows:
A method of distributing weights for multi-active load balancing, comprising:
1) Grouping the load balancing clusters;
2) Flexibly configuring the weight of each group.
The grouping of the load-balancing clusters means grouping according to the positional relation to the physical machine where the back-end server is located; the groups comprise an affinity group and an anti-affinity group.
When flexibly configuring the weight of each group, the weight of the affinity group is set to N and the weight of the anti-affinity group is set to K (with N > K ≥ 1), and the connection count is distributed according to the ratio N:K. The weights can also be customized, provided the affinity-group weight is greater than the anti-affinity-group weight.
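As a concrete illustration of the ratio-based allocation above, the split of new connections can be sketched in shell arithmetic (the values N = 3, K = 1 and a total of 400 connections are example numbers, not mandated by the method):

```shell
#!/bin/sh
# Example only: split TOTAL new connections between the affinity group
# (weight N) and the anti-affinity group (weight K) in the ratio N:K.
N=3        # affinity-group weight (example value)
K=1        # anti-affinity-group weight (example value)
TOTAL=400  # new connections arriving at the vrouter (example value)

AFFINITY=$(( TOTAL * N / (N + K) ))   # share for the affinity group
ANTI=$(( TOTAL - AFFINITY ))          # remainder for the anti-affinity group
echo "affinity=$AFFINITY anti-affinity=$ANTI"   # prints affinity=300 anti-affinity=100
```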
Further, the method comprises the steps of,
Firstly, using the multipath routing supported by the linux kernel, an equal-cost route whose destination address is the load-balancing vip is issued on the virtual router vrouter, so that the same destination address can use several links simultaneously within the network environment; each next hop is the service-network ip address of a load-balancing node.
On this basis, when the virtual router vrouter issues the route whose destination address is the load-balancing vip, the weight is adjusted according to the affinity relation between the load-balancing node and the back-end server: if they are affinitive, the weight is set higher, otherwise lower. Thus, when a load-balancing node communicates with the back-end server, cross-host virtual machine communication is reduced, and the overall queries-per-second (qps) performance of the load balancer is effectively improved.
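The weighted equal-cost route described above can be sketched as follows; all addresses, the device name and the weights here are hypothetical placeholders (a concrete environment is given in the detailed description):

```shell
# Sketch only: publish the load-balancing vip as a single multipath route,
# giving the affinitive nexthops a larger weight than the anti-affinitive ones.
# The vip 10.0.0.100, the nexthop addresses, the device name qr-example and
# the weights are all hypothetical.
ip route add 10.0.0.100/32 \
    nexthop via 10.0.0.11 dev qr-example weight 3 \
    nexthop via 10.0.0.12 dev qr-example weight 3 \
    nexthop via 10.0.0.13 dev qr-example weight 1 \
    nexthop via 10.0.0.14 dev qr-example weight 1
```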
The affinity group is the load-balancing cluster located on the same physical machine as the back-end server; it carries more than half of the network traffic and thus plays the role of the primary server. The anti-affinity group is the load-balancing cluster located on physical machines different from the back-end server; it is responsible for less than half of the network traffic and plays the role of the standby server.
Still further, the method comprises the steps of,
After both the load-balancing nodes and the virtual router vrouter are configured, nat for external network access is configured.
Firstly, the vip is bound to a floating ip, and then nat is configured on the egress firewall using the public network ip and the floating ip. Traffic accessing the public network ip first reaches the egress firewall, where a dnat translates the destination ip into the floating ip (fip) bound to the load-balancing vip; the traffic then passes through the snat namespace of the network node, reaches the virtual router vrouter, and is forwarded to a load-balancing node according to the equal-cost route whose destination ip is the vip; finally, the load-balancing node forwards the traffic to the application server according to the nginx configuration.
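The dnat step above could look as follows if the egress firewall happens to be a Linux host running iptables; the method does not prescribe a particular firewall, and PUBLIC_IP and FIP below are placeholders, not values from the text:

```shell
# Assumption: a Linux/iptables egress firewall (the document does not specify
# the firewall implementation). Both addresses are hypothetical placeholders.
PUBLIC_IP=203.0.113.10   # hypothetical public network ip
FIP=192.0.2.50           # hypothetical floating ip bound to the vip

# dnat: rewrite the destination of traffic hitting the public ip to the fip,
# so it can travel on toward the vrouter and the load-balancing vip.
iptables -t nat -A PREROUTING -d "$PUBLIC_IP" -j DNAT --to-destination "$FIP"
```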
Still further, the method comprises the steps of,
A client on the public network accesses the public network ip to verify public-network reachability; client1 and client2 on the intranet access the vip to verify intranet reachability. The number of newly established connections is then tested to check whether the distribution of connections matches the configured ratio.
The invention has the following beneficial effects:
1) Cross-host virtual machine communication is reduced, lowering the pressure on the physical network;
2) The virtual router vrouter distributes more connections and network traffic to the load-balancing nodes that are affinitive with the back-end server, so the overall queries-per-second (qps) performance of the load balancer is improved.
Detailed Description
To make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions of the embodiments are described below clearly and completely. The described embodiments are obviously only some, not all, embodiments of the invention; all other embodiments obtained by persons of ordinary skill in the art without inventive effort based on these embodiments fall within the protection scope of the invention.
The invention provides a method, applied to a cloud platform, for distributing weights for multi-active load balancing. Firstly, using the multipath routing supported by the linux kernel, an equal-cost route whose destination address is the load-balancing vip is issued on the virtual router vrouter, so that the same destination address can use several links simultaneously within the network environment; each next hop is the service-network ip address of a load-balancing node. On this basis, when the virtual router vrouter issues the route whose destination address is the load-balancing vip, the weight is adjusted according to the affinity relation between the load-balancing node and the back-end server: if they are affinitive, the weight is set higher, otherwise lower. Thus, when a load-balancing node communicates with the back-end server, cross-host virtual machine communication is reduced, and the overall queries-per-second (qps) performance of the load balancer is effectively improved.
The load-balancing cluster is divided into two groups, an affinity group and an anti-affinity group. The affinity group is the load-balancing cluster located on the same physical machine as the back-end server; it carries most of the network traffic and plays the role of the primary server. The anti-affinity group is the load-balancing cluster located on physical machines different from the back-end server; it is responsible for a small share of the network traffic and plays the role of the standby server.
The specific environment construction process is as follows:
1. Environment preparation requires two networks. The first is the service management network, through which the console performs configuration of the load-balancing nodes, such as configuring the load balancer and the virtual machine systems of the load-balancing nodes; it is used mainly for management, and service traffic does not pass through this network card. The second is the tenant service network, the private virtual network where the back-end servers reside; it provides the vip for external service and communicates with the back-end servers.
2. Two back-end servers, server1 and server2, are deployed; each runs a simple web service used to test the forwarding effect of the load balancing.
3. A vip is applied for under the tenant service network, and the load-balancing nodes are created as two groups of virtual machines. One group, the affinity group, is created on the same physical machine as the back-end servers and serves as the cluster carrying the main traffic; the other, the anti-affinity group, is created on a different physical machine from the back-end servers and serves as the standby server group carrying a small share of the traffic. The vip and the load balancer are configured on all load-balancing nodes; the vip is configured on the lo network card with ip addr add VIP/32 dev lo. In addition to the load-balancer configuration, the arp reply for the vip must be turned off on the load-balancing nodes, using the following commands:
echo 1 > /proc/sys/net/ipv4/conf/eth1/arp_ignore
echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore
echo 1 > /proc/sys/net/ipv4/conf/lo/arp_ignore
echo 2 > /proc/sys/net/ipv4/conf/eth1/arp_announce
echo 2 > /proc/sys/net/ipv4/conf/all/arp_announce
echo 2 > /proc/sys/net/ipv4/conf/lo/arp_announce
These commands turn off the arp response for the vip on the node network cards, while the arp response is instead provided by the virtual router, so that traffic accessing the vip from the intranet or the extranet first passes through the virtual router vrouter and is distributed by it, avoiding the split-brain situation in which two load-balancing nodes both answer the arp request.
4. The equal-cost route is issued in the virtual router vrouter with the weight of the affinity group set to 3 and the weight of the anti-affinity group set to 1, so traffic allocation follows a three-to-one policy: when access traffic reaches the virtual router vrouter, tcp connections are established at a ratio of three to one. For example, out of 400 new connections, 300 are allocated to the affinity group and 100 to the anti-affinity group. Taking 172.16.122.1 as the vip, 172.16.122.2 and 172.16.122.3 as the tenant-service-network ips of the affinity group, and 172.16.122.4 and 172.16.122.5 as those of the anti-affinity group, with gateway network card name qr-2b749a55-20, the following command is executed in the virtual router vrouter:
ip route add 172.16.122.1 nexthop via 172.16.122.2 dev qr-2b749a55-20 weight 3 nexthop via 172.16.122.3 dev qr-2b749a55-20 weight 3 nexthop via 172.16.122.4 dev qr-2b749a55-20 weight 1 nexthop via 172.16.122.5 dev qr-2b749a55-20 weight 1
Then the arp proxy is turned on in the virtual router vrouter using the following commands:
echo 1 > /proc/sys/net/ipv4/conf/qr-2b749a55-20/proxy_arp
echo 1 > /proc/sys/net/ipv4/conf/qr-2b749a55-20/proxy_arp_pvlan
Therefore, traffic accessing the vip from the intranet or the extranet first passes through the virtual router vrouter, which distributes it, avoiding the split-brain situation in which two load-balancing nodes both answer the arp request for the vip.
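Once the route and the arp proxy from this step are in place, they can be inspected on the vrouter as a sanity check (diagnostic commands only; they assume the configuration above has been applied):

```shell
# Show the multipath route for the vip; the configured nexthops and their
# per-nexthop weights should be listed.
ip route show 172.16.122.1

# Confirm the arp proxy settings on the gateway network card; both should
# read back as 1 after the echo commands above.
cat /proc/sys/net/ipv4/conf/qr-2b749a55-20/proxy_arp
cat /proc/sys/net/ipv4/conf/qr-2b749a55-20/proxy_arp_pvlan
```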
5. After both the load-balancing nodes and the virtual router vrouter are configured, nat for external network access is configured. First, the vip must be bound to a floating ip, and then nat is configured on the egress firewall using the public network ip and the floating ip (provided the floating network can communicate with the egress firewall). Traffic accessing the public network ip first reaches the egress firewall, where a dnat translates the destination ip into the floating ip (fip) bound to the load-balancing vip; the traffic then passes through the snat namespace of the network node, reaches the virtual router vrouter, and is forwarded to a load-balancing node according to the equal-cost route whose destination ip is the vip; finally, the load-balancing node forwards the traffic to the application server according to the nginx configuration.
6. A client on the public network accesses the public network ip to verify public-network reachability, and client1 and client2 on the intranet access the vip to verify intranet reachability. The number of newly established connections is then tested to see whether the distribution of connections matches the configured three-to-one policy.
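One simple way to check the connection split is to count established connections on each load-balancing node and compare the group totals against the configured weight ratio; the web-service port 80 below is an assumption for the test service, not a value given in the text:

```shell
# Run on each load-balancing node. Port 80 is an assumed listening port of
# the test web service. Summing the per-node counts for the affinity group
# and the anti-affinity group should approximate the configured weight ratio.
ss -tn state established '( sport = :80 )' | tail -n +2 | wc -l
```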
Interpretation of some terms herein: vrouter, a virtual router; load-balancing vip, the virtual address of the load balancer; qps, queries per second, a measure of how much query traffic a server handles in a given time.
The foregoing description is only illustrative of the preferred embodiments of the present invention, and is not intended to limit the scope of the present invention. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present invention are included in the protection scope of the present invention.