CN115632987A - Load balancing method based on DNS and route issuing control - Google Patents
Load balancing method based on DNS and route issuing control
- Publication number
- CN115632987A (application number CN202211201467.3A)
- Authority
- CN
- China
- Prior art keywords
- site
- backup
- dns
- client
- backup group
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L47/00—Traffic control in data switching networks
- H04L47/10—Flow control; Congestion control
- H04L47/12—Avoiding congestion; Recovering from congestion
- H04L47/125—Avoiding congestion; Recovering from congestion by balancing the load, e.g. traffic engineering
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L41/00—Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
- H04L41/06—Management of faults, events, alarms or notifications
- H04L41/0654—Management of faults, events, alarms or notifications using network fault recovery
- H04L41/0663—Performing the actions predefined by failover planning, e.g. switching to standby network elements
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L41/00—Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
- H04L41/06—Management of faults, events, alarms or notifications
- H04L41/0654—Management of faults, events, alarms or notifications using network fault recovery
- H04L41/0668—Management of faults, events, alarms or notifications using network fault recovery by dynamic selection of recovery network elements, e.g. replacement by the most appropriate element after failure
Landscapes
- Engineering & Computer Science (AREA)
- Computer Networks & Wireless Communication (AREA)
- Signal Processing (AREA)
- Data Exchanges In Wide-Area Networks (AREA)
Abstract
The application discloses a load balancing method based on DNS and route issuing control. The method may comprise the following steps: establishing at least one backup group and configuring the priority of the sites in each backup group, wherein each backup group comprises a master site and at least one backup site; each backup group advertises the IP address of its master site; a global service load sharing device responds to a client's request and determines the backup group that provides service for the client according to the load condition of each backup group; the client accesses the master site of that backup group through the IP address, and the master site determines the server that provides service for the client according to the load conditions of its servers. The invention can accurately control the load-sharing proportion, allows active/standby operation and load sharing among sites to coexist, and provides high availability while balancing the load.
Description
Technical Field
The present invention relates to the field of load balancing, and more particularly, to a load balancing method based on DNS and route distribution control.
Background
Global load balancing based on DNS resolution is a commonly used method. However, in order to reduce the resolution pressure on DNS servers, the standard specifies that DNS adopts a caching mechanism: the local DNS stores resolution results, usually for more than 30 minutes, and when a new resolution request arrives it does not perform iterative resolution to the upper-level and authoritative DNS servers but returns the cached result directly to the requester. As a result, when the server corresponding to the original resolution result fails, the clients served by that server cannot obtain service until the cache ages out. For this situation, the following typical schemes exist in the industry:
Scheme 1: If an Internet connection fails, a power failure occurs, a switch or local Server Load Balancing (SLB) device fails, a denial-of-service (DoS) attack occurs, or a catastrophic event takes down an entire site, the GSLB device should detect the failure and route requests to the remaining sites so that client connections succeed and traffic can continue. The specific process is as follows:
1) The client requests DNS resolution through the local DNS. The GSLB, acting as the authoritative DNS for the domain, returns two addresses, 1.1.1.1 and 2.2.2.2, in response to a query for the FQDN www.web.net. The local DNS reorders the received resolution addresses and sends them to the client.
2) The client selects one of the resolved addresses to access (usually the first one).
3) SLB1 at site A detects the server failure and stops advertising IP address 1.1.1.1 via BGP (if the SLB itself or its connection fails, the advertisement obviously stops as well).
4) When the client fails to access site A, it automatically selects the other DNS-resolved address, 2.2.2.2, for access. A minimal client-side sketch of this fallback is given below.
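The fallback in steps 2) and 4) can be pictured with a short Python sketch. This is only an illustration under assumed port and timeout values, not part of the patented method:

```python
import socket

# Minimal client-side sketch (illustrative assumption): walk the DNS-resolved
# address list and fall back to the next address when a connection attempt fails.
def connect_with_failover(addresses, port=80, timeout=3.0):
    for addr in addresses:
        try:
            # first reachable site wins
            return addr, socket.create_connection((addr, port), timeout=timeout)
        except OSError:
            continue  # site unreachable, try the next resolved address
    raise ConnectionError("no site reachable")

# With the addresses from the example: if site A (1.1.1.1) stops advertising its
# route, the client ends up connected to site B (2.2.2.2).
# addr, sock = connect_with_failover(["1.1.1.1", "2.2.2.2"])
```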
The problems with this scheme are as follows:
1. The active/standby operation mode cannot be realized.
2. Load balancing cannot be controlled: which site a client connects to depends entirely on the order of addresses returned by the local DNS and on the address the client selects; the GSLB has no control over it.
3. Switch-back is slow. If site A fails, clients originally connected to site A switch to site B; when site A recovers, these clients cannot switch back to site A quickly.
Scheme 2: the anycast address is adopted. The specific process is as follows:
1) The server load balancers (or routers) of sites A and B each advertise to the Internet that "my IP address is 1.1.1.1". Internet routers exchange metrics through BGP and propagate this information to the routers closest to the clients.
2) DNS resolution occurs, returning only one IP address, 1.1.1.1, in response to a query for the FQDN www.web.net.
3) The client connects to the geographically closest site (say site A).
4) SLB1 at site A detects the server failure and stops advertising IP address 1.1.1.1 via BGP (if the SLB itself or its connection fails, the advertisement obviously stops as well).
5) Route convergence occurs among the Internet routers, eventually removing the path to site A.
6) The client still connects to IP address 1.1.1.1, but now reaches site B.
The problems with this scheme are as follows:
1. The site to which a client connects changes with route switching. Internet routing is very complex, and the site a client reaches changes as the routes to the anycast address change, which interrupts the client's sessions. Furthermore, if the routing topology changes during a client session, packets may flow alternately to site A and site B, so the client cannot connect successfully even though both sites are operating properly.
2. The anycast address may not be allowed to be advertised. Typically, a user purchases IP addresses from an operator and designates one as the anycast address, which may only be allowed to be advertised from one region. If the two sites are located in different regions (disaster-recovery scenarios require separate locations), the anycast address may not be permitted to be advertised at one of the sites when it connects to the operator, and it is even harder to advertise it from two operators simultaneously if the two sites connect to different operators.
3. Load balancing cannot be controlled: which site a client connects to depends entirely on where the client's traffic is routed.
Scheme 3: the DNS responds with only one address, and re-resolution occurs after a timeout following a failure. The specific process is as follows:
This method selects only one site at a time for resolution and responds to DNS requests with a single IP. When the selected site fails, the DNS server only receives new DNS requests from clients after the caches of the local DNS and the client browser time out; having detected the failure of the original master site, the DNS server responds to new requests with the IP of the standby site, so users are switched to the standby site.
The problem with this scheme: it can realize active/standby operation between sites, but the active/standby switchover usually requires waiting more than 30 minutes, so availability is too low.
Therefore, it is necessary to develop a load balancing method based on DNS and route distribution control.
The information disclosed in this background section is only for enhancement of understanding of the general background of the invention and should not be taken as an acknowledgement or any form of suggestion that this information forms the prior art already known to a person skilled in the art.
Disclosure of Invention
The invention provides a load balancing method based on DNS and route issuing control, which can accurately control the load-sharing proportion, allows active/standby operation and load sharing among sites to coexist, and provides high availability while balancing the load.
The load balancing method based on DNS and route issuing control comprises the following steps:
establishing at least one backup group, and configuring the priority of the site in each backup group, wherein the backup group comprises a main site and at least one backup site;
each backup group issues the IP address of the corresponding master site;
the global service load sharing device responds to the client's request and determines the backup group that provides service for the client according to the load condition of each backup group;
and the client accesses the master site of the backup group through the IP address, and the master site determines the server that provides service for the client according to the load conditions of the servers.
Preferably, if the primary site in the backup group fails, the site with the highest priority in the non-failed backup sites is determined to be the new primary site.
Preferably, each backup group determines one master site and a plurality of backup sites within the backup group according to priority, configuration policy, site availability and algorithm.
Preferably, if the site includes a plurality of servers, an SLB is set in the site, and the SLB is connected to the plurality of servers and is configured to determine a server that provides a service for the client according to a load condition of the server.
Preferably, detecting site failures by VRRP comprises:
establishing a VPLS, wherein the SLB of each site selects an interface to connect to the VPLS, and an IP address is configured on the SLB side of the interface so that the interfaces configured by the SLBs of all sites are in the same network segment;
the detection protocol detects whether any site has failed.
Preferably, the method further comprises the following step:
if the original master site recovers from its failure, changing the master site from the new master site back to the original master site.
Preferably, when the number of backup groups is more than one, different backup groups can contain the same site.
Preferably, the master site of one backup group can serve as a backup site of another backup group.
Preferably, the same site has different IP addresses in different backup groups.
Preferably, the backup sites include a local backup site and a remote backup site, and the remote backup site has a lower priority than the local backup site.
The beneficial effects are as follows:
1. The method achieves high availability while realizing load balancing, with convergence time on the order of seconds: failure detection and state negotiation can typically be completed within 4 s, fast detection with BFD configured can be completed within 1 s, and route advertisement/withdrawal and network-wide convergence typically take about 3 s.
2. By adopting multiple backup groups, active-active load sharing can be performed among multiple data centers, and the load-sharing proportion can be accurately controlled.
3. Active/standby operation and load sharing among sites can coexist.
The method of the present invention has other features and advantages which will be apparent from or are set forth in detail in the accompanying drawings and the following detailed description, which are incorporated herein, and which together serve to explain certain principles of the invention.
Drawings
The above and other objects, features and advantages of the present invention will become more apparent by describing in detail exemplary embodiments thereof with reference to the attached drawings, wherein like reference numerals generally represent like parts in the exemplary embodiments of the present invention.
Fig. 1 shows a flow chart of the steps of a load balancing method based on DNS and route distribution control according to an embodiment of the present invention.
FIG. 2 shows a schematic diagram of a two-location, three-center scenario, according to one embodiment of the present invention.
Figure 3 illustrates a diagram of employing VRRP to detect failures and negotiate a determined state, according to one embodiment of the invention.
FIG. 4 shows a schematic diagram of an active-active data center scenario, according to one embodiment of the present invention.
Detailed Description
Preferred embodiments of the present invention will be described in more detail below. While the following describes preferred embodiments of the present invention, it should be understood that the present invention may be embodied in various forms and should not be limited by the embodiments set forth herein.
Fig. 1 shows a flow chart of the steps of a load balancing method based on DNS and route distribution control according to an embodiment of the present invention.
As shown in fig. 1, the load balancing method based on DNS and route distribution control includes: step 101, establishing at least one backup group and configuring the priority of the sites in each backup group, wherein each backup group comprises a master site and at least one backup site; step 102, each backup group issues the IP address of its corresponding master site; step 103, the global service load sharing device responds to the client's request and determines the backup group that provides service for the client according to the load condition of each backup group; and step 104, the client accesses the master site of the backup group through the IP address, and the master site determines the server that provides service for the client according to the load conditions of the servers.
In one example, if a primary site in a backup group fails, the highest priority site of the non-failed backup sites is determined to be the new primary site.
In one example, each backup group determines a primary site and backup sites within the backup group based on priority, configuration policy, site availability, and algorithms.
In one example, if the site includes a plurality of servers, one SLB is set in the site, and the SLB is connected to the plurality of servers and is used for determining a server providing a service for the client according to a load condition of the server.
In one example, detecting site failures by VRRP comprises:
establishing a VPLS, wherein the SLB of each site selects an interface to connect to the VPLS, and an IP address is configured on the SLB side of the interface so that the interfaces configured by the SLBs of all sites are in the same network segment;
the detection protocol detects whether any site has failed.
In one example, the method further comprises:
if the original master site recovers from its failure, changing the master site from the new master site back to the original master site.
In one example, when the number of backup groups is more than one, different backup groups can contain the same site.
In one example, the primary site of one backup group can be the backup site of another backup group.
In one example, the same site has different IP addresses in different backup groups.
In one example, the backup sites include a local backup site and a remote backup site, and the remote backup site has a lower priority than the local backup site.
Specifically, at least one backup group is established and the priority of the sites in each backup group is configured. The priority includes a configured and/or calculated priority value for each site: for example, a fixed priority may be configured, or the priority value may be calculated dynamically according to the state of the site (for example, the number or proportion of available servers). The election algorithm may simply select by priority (the highest priority becomes the master and the others become backups), or it may also take into account the available external service bandwidth of the site, its current load level, and so on; however, every site must use the same algorithm so that the calculation results are consistent (see the sketch below). The backup group comprises one master site and at least one backup site. If a site contains multiple servers, an SLB is deployed in the site and connected to those servers; if a site contains only one server, no SLB is required. Each backup group determines one master site and several backup sites within the group according to priority, configured policy, site availability and the algorithm. When there is more than one backup group, different backup groups may contain the same site; the master site of one backup group may serve as a local backup site of other backup groups, and the IP addresses of the same site in different backup groups are different. Backup sites include local backup sites and remote backup sites, and remote backup sites have lower priority than local backup sites.
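As an illustration of the in-group election described above, the following Python sketch assumes one possible priority model (a configured value scaled by the proportion of available servers); the patent leaves the concrete policy and algorithm to configuration, so the names and weighting here are assumptions:

```python
from dataclasses import dataclass

@dataclass
class Site:
    name: str
    configured_priority: int      # fixed, per-backup-group configuration
    servers_total: int
    servers_available: int
    reachable: bool = True        # result of failure detection (e.g. VRRP)

    def effective_priority(self) -> float:
        # dynamic adjustment by the proportion of available servers (assumed policy)
        ratio = self.servers_available / self.servers_total if self.servers_total else 0
        return self.configured_priority * ratio

def elect(sites):
    # every site runs this same deterministic algorithm, so results are consistent
    candidates = [s for s in sites if s.reachable and s.servers_available > 0]
    if not candidates:
        return None, []
    ordered = sorted(candidates, key=lambda s: s.effective_priority(), reverse=True)
    return ordered[0], ordered[1:]    # (master, backups in takeover order)

# Usage: siteA is master while healthy; if it fails, the highest-priority
# remaining site takes over.
master, backups = elect([
    Site("siteA", 200, 10, 10),
    Site("siteB", 150, 10, 10),
    Site("siteC", 100, 10, 10),
])
```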
Each site configures a loopback interface for each backup group and assigns it an IP address used to provide service to clients; the IP addresses of the same site in different backup groups are different. The route to the loopback interface IP of the master site's SLB in a backup group is advertised to the network, while the route to the loopback interface IP of a standby site's SLB is not advertised. How the route is advertised depends on the routing protocol: if the OSPF protocol is adopted, OSPF is enabled on the loopback interface when its route needs to be advertised and disabled on the loopback interface when the route needs to be withdrawn; if the BGP protocol is adopted, the loopback interface's address route is imported into BGP when it needs to be advertised and removed from BGP when it needs to be withdrawn; if static routing is adopted, a route to the loopback interface address is generated when it needs to be advertised and deleted when it needs to be withdrawn. A simple sketch of this per-protocol dispatch follows.
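The following sketch only models the decision (which loopback route to advertise or withdraw, and by which mechanism); the data layout and action strings are assumptions for illustration, and no vendor configuration commands are implied:

```python
def loopback_route_actions(backup_group, protocol):
    """Return (loopback IP, action) pairs: advertise the master's route, withdraw the rest."""
    actions = []
    for site in backup_group["sites"]:
        advertise = (site["name"] == backup_group["master"])
        if protocol == "ospf":
            # enable OSPF on the loopback to advertise, disable it to withdraw
            actions.append((site["loopback_ip"], "enable ospf" if advertise else "disable ospf"))
        elif protocol == "bgp":
            # import the loopback address route into BGP, or remove it
            actions.append((site["loopback_ip"], "import into bgp" if advertise else "remove from bgp"))
        elif protocol == "static":
            # create or delete a static route to the loopback address
            actions.append((site["loopback_ip"], "add static route" if advertise else "delete static route"))
    return actions

# Usage for backup group 1 of Example 1 (addresses taken from the description):
group1 = {"master": "siteA",
          "sites": [{"name": "siteA", "loopback_ip": "1.1.1.1"},
                    {"name": "siteB", "loopback_ip": "4.4.4.4"},
                    {"name": "siteC", "loopback_ip": "5.5.5.5"}]}
print(loopback_route_actions(group1, "ospf"))
```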
A global service load sharing device (GSLB) responds to the client's request and determines the backup group that provides service for the client according to the load condition of each backup group; the client accesses the master site of the backup group through the IP address, and the SLB of the master site determines the server that provides service for the client according to the load conditions of the servers.
Site failures are detected by VRRP as follows: a VPLS is established, the SLB of each site selects an interface to connect to the VPLS, and an IP address is configured on the SLB side of the interface so that the interfaces configured by the SLBs of all sites are in the same network segment; the detection protocol then detects whether any site has failed. The VRRP protocol is only one method of implementing state negotiation; other methods, such as ICCP, may also be used as extended applications of the base protocol.
If the master site in a backup group fails, the non-failed local backup site with the highest priority is determined to be the new master site. Because the loopback interface address of the new master site has already been returned to the client in the response to the DNS resolution request, when the original master site fails and the original master IP becomes unreachable, the client tries the other IP addresses resolved by DNS as the destination IP to request service. At this time, because only the new master IP is reachable, the client's requests switch to the site where the new master IP is located, thereby realizing automatic switchover.
If the original master site recovers from its failure, the master site is changed from the new master site back to the original master site. Alternatively, the user-configured policy may specify that no switch-back occurs after the original master recovers; in that case the master is not changed and no switchover takes place.
Client: an entity that uses the services provided by the servers; clients may be distributed anywhere in the network.
Local DNS (Domain Name System): in the DNS system, the server that directly responds to the client's domain name requests; its address is configured on the client, and a local DNS usually provides domain name resolution service for many clients. The local DNS obtains resolution results by iteratively querying the Internet root domain name servers and the authoritative domain name servers. A local DNS usually has a cache; if a resolution request from a client arrives, it responds directly with the result in the cache and does not send a query to the root domain name servers at that time. A minimal sketch of this caching behavior is given below.
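The sketch assumes a 30-minute TTL and a pluggable iterative resolver (both assumptions for illustration); it shows why clients of a failed site keep receiving the stale address until the cached entry expires:

```python
import time

class LocalDnsCache:
    """Local-DNS-style cache: answers from cache until the entry's TTL expires."""
    def __init__(self, ttl_seconds=1800):            # ~30 minutes, per the background
        self.ttl = ttl_seconds
        self.cache = {}                               # fqdn -> (addresses, expiry)

    def resolve(self, fqdn, iterative_resolver):
        entry = self.cache.get(fqdn)
        if entry and entry[1] > time.time():
            return entry[0]                           # cached (possibly stale) answer
        addresses = iterative_resolver(fqdn)          # query root/authoritative DNS
        self.cache[fqdn] = (addresses, time.time() + self.ttl)
        return addresses

# Usage: even after the site behind 1.1.1.1 fails, resolve() keeps answering
# 1.1.1.1 until the entry expires.
cache = LocalDnsCache()
print(cache.resolve("www.web.net", lambda fqdn: ["1.1.1.1"]))
```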
SLB (Server Load Balancer): realizes load sharing of the service provided by multiple servers to clients; it can distribute a client's request to a suitable server for processing according to factors such as the CPU, memory and network bandwidth usage of each server.
GSLB (Global Server Load Balancer): a global service load sharing device that distributes client requests to suitable sites according to the processing capacity, current load condition, availability, etc. of each site. It can realize load balancing, active/standby operation, remote disaster recovery and so on among multiple sites.
Server: an entity that provides a service to clients, typically application software (software that provides some service to clients) running on a physical server. These service entities may be distributed across multiple physical servers or virtual machines.
Site: refers to a service area at a physical location, such as a data center.
To facilitate understanding of the solutions of the embodiments of the present invention and their effects, three specific application examples are given below. It will be appreciated by persons skilled in the art that these examples are merely for facilitating understanding of the invention, and their specific details are not intended to limit the invention in any way.
Example 1
FIG. 2 shows a schematic diagram of a two-location, three-center scenario, according to one embodiment of the present invention.
This embodiment is a two-location, three-center scenario: two data centers are in a working state at the same time, share the load with each other, and either one takes over all services after the other fails; a third, remote data center provides service after both of the first two data centers fail, as shown in fig. 2.
Description of the system: a scenario of load sharing and backup by multiple sites may typically include two sites, with one used as the master and the other as the backup, or with both sharing the load and also backing each other up; or three sites, as in the two-location, three-center scenario: two sites in one city share the load as masters, and a third site in another city serves as the backup (used when the other two fail).
Each site has an SLB as the in-site load-sharing device. The SLB uses a specified algorithm, based on the state of each server in the site and the configured policy, to determine the server that provides service for the client, and forwards the request to that server. A minimal sketch of such an in-site selection is given below.
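One simple in-site selection policy (least-loaded first) is sketched here; the metric names and values are assumptions, since the patent leaves the concrete algorithm to the configured policy:

```python
def pick_server(servers):
    """Pick the server for a request: skip servers that are down, prefer the least-loaded."""
    healthy = [s for s in servers if s["up"]]
    if not healthy:
        raise RuntimeError("no server available in this site")
    return min(healthy, key=lambda s: (s["cpu"], s["active_connections"]))

# Usage with illustrative metrics: srv2 is chosen because it has the lowest load.
server = pick_server([
    {"name": "srv1", "up": True,  "cpu": 0.72, "active_connections": 310},
    {"name": "srv2", "up": True,  "cpu": 0.35, "active_connections": 120},
    {"name": "srv3", "up": False, "cpu": 0.0,  "active_connections": 0},
])
```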
Backup groups are determined according to service reliability requirements, site processing capacity, and so on. Each backup group selects one master and several backups within the group according to priority, configured policy, site availability, algorithm, etc. One site in each backup group serves as the master site, and the other site or sites serve as backup sites that take over service in order after the master site fails. For example, the two-location, three-center scenario uses two backup groups: backup group 1 comprises master siteA and backups siteB and siteC, and backup group 2 comprises master siteB and backups siteA and siteC. Two backup groups mean that in the normal case service can be provided by two sites simultaneously. When two sites are active at the same time to share the load, each backup group has exactly one site in the active state, and those active sites are different.
Each site configures a loopback interface for each backup group and assigns it an IP address used to provide service to clients. In backup group 1, 1.1.1.1 is assigned to the SLB at siteA, 4.4.4.4 to the SLB at siteB, and 5.5.5.5 to the SLB at siteC. In backup group 2, 2.2.2.2 is assigned to the SLB at siteB, 3.3.3.3 to the SLB at siteA, and 6.6.6.6 to the SLB at siteC.
The route to the loopback interface IP of the master site's SLB in each backup group is advertised to the network. The route to the loopback interface IP of a standby site's SLB is not advertised, and any previously advertised route must be withdrawn.
The GSLB monitors the state of each SLB by ICMP, UDP and other methods to obtain its load condition. The GSLB provides domain name resolution: it determines the backup group that provides service for the client according to the state and load condition of each SLB, the configured policy and the algorithm, and returns the list of IP addresses corresponding to that backup group to the client, as sketched below.
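The resolution step can be sketched as follows, assuming a simple "least-loaded healthy group" policy and an in-memory health table fed by the ICMP/UDP probes; both are illustrative assumptions rather than the GSLB's prescribed implementation:

```python
def choose_backup_group(groups):
    """groups: name -> {"healthy": bool, "load": float (0..1), "addresses": [...]}."""
    available = {name: g for name, g in groups.items() if g["healthy"]}
    if not available:
        return None
    chosen = min(available, key=lambda name: available[name]["load"])  # least-loaded group
    return available[chosen]["addresses"]

# Usage with Example 1: backup group 1 is selected, so the DNS response contains
# 1.1.1.1, 4.4.4.4 and 5.5.5.5 (only the master's route is actually advertised).
answer = choose_backup_group({
    "group1": {"healthy": True, "load": 0.40, "addresses": ["1.1.1.1", "4.4.4.4", "5.5.5.5"]},
    "group2": {"healthy": True, "load": 0.55, "addresses": ["2.2.2.2", "3.3.3.3", "6.6.6.6"]},
})
```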
For example, in the figure, the client requests DNS resolution through the local DNS, the GSLB acting as the authoritative domain name server determines that backup group 1 provides the service, and the DNS returns a resolution result containing 1.1.1.1, 4.4.4.4 and 5.5.5.5.
The client selects one of the resolved addresses to access the service. However, because only the master's loopback route is advertised, only the master IP is reachable, so the client in effect accesses the master IP address of the backup group.
Site failures among the members of the same protection group are detected by VRRP or another protocol, and the state is determined: by default the master IP has the highest (configured) priority, and the standby IPs are configured with decreasing priorities in protection order. Excluding unavailable sites, the site with the highest priority serves as the master and the other sites serve as backups.
Figure 3 illustrates a diagram of employing VRRP to detect failures and negotiate a determined state, according to one embodiment of the invention.
As shown in fig. 3, a VPLS is established; the SLB of each site selects an interface to connect to the VPLS, and an IP address is configured on the SLB side of the interface so that the interfaces configured by the SLBs of all sites are in the same network segment. A VRRP group and a priority are configured for each backup group in this network segment. The VRRP protocol selects one master and several backups according to the state (representing the state of the site) and the priority of each SLB. A simplified sketch of this negotiation is given after this paragraph.
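The sketch below is a simplified model of that negotiation, assuming a 1-second advertisement interval and a three-interval hold time (values chosen only to be consistent with the seconds-level convergence figures given earlier); it is not the VRRP wire protocol itself:

```python
import time

ADVERT_INTERVAL = 1.0
HOLD_TIME = 3 * ADVERT_INTERVAL          # assumed hold time before a peer is declared failed

class GroupState:
    """One SLB's view of a backup group on the shared VPLS segment."""
    def __init__(self, my_name, my_priority):
        self.my_name, self.my_priority = my_name, my_priority
        self.last_seen = {}              # peer name -> (priority, timestamp of last advert)

    def on_advertisement(self, peer, priority):
        self.last_seen[peer] = (priority, time.time())

    def current_master(self):
        # peers that stopped advertising for HOLD_TIME are treated as failed
        now = time.time()
        alive = {self.my_name: self.my_priority}
        alive.update({p: prio for p, (prio, ts) in self.last_seen.items()
                      if now - ts < HOLD_TIME})
        return max(alive, key=alive.get)  # highest priority among live members wins

# Usage: siteB's view of backup group 1. Once siteA stops advertising for HOLD_TIME,
# current_master() flips to siteB and siteB advertises 4.4.4.4.
state = GroupState("siteB", 150)
state.on_advertisement("siteA", 200)
print(state.current_master())            # "siteA" while siteA keeps advertising
```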
When the master site fails, the detection protocol detects the failure and triggers the election of a new master. In this example, if siteA fails, VRRP detects the failure and recalculates the new master as siteB.
The VRRP protocol is only one method of implementing state negotiation; other methods, such as ICCP, may also be used as extended applications of the base protocol.
After the site corresponding to the highest-priority IP fails, the other sites detect the failure, and the sites protecting the failed site recalculate the new master using the configured policy and algorithm. The general policy is: after excluding unavailable sites, select the IP with the highest priority as the new master. Once an IP becomes the master, the route corresponding to that IP is advertised; for example, if the OSPF protocol is used, OSPF is enabled on the loopback interface corresponding to the IP. All clients can then reach the address.
In this example, after siteA fails, siteB is elected the new master and the route to SLB2's address 4.4.4.4 is advertised. After the client's access to 1.1.1.1 fails, its attempt to connect to 4.4.4.4 succeeds, so it switches to siteB for service.
Because the loopback address of the new master site was already returned to the client as the response to the DNS resolution request, when the original master site fails and the original master IP becomes unreachable, the client tries the other IPs resolved by DNS as the destination IP to request service. At this time, because only the new master IP is reachable, the client's requests switch to the site where the new master IP is located, thereby realizing automatic switchover.
After the site corresponding to the original highest-priority IP recovers from its failure, the other sites detect the recovery, and the sites protecting it recalculate the new master using the configured policy and algorithm; the general policy is the same: after excluding unavailable sites, select the IP with the highest priority as the new master. Because the recovered site has the highest priority, it advertises the route to its corresponding IP, and the site that no longer has the highest priority withdraws the route to its corresponding IP.
When the address the client was using becomes unreachable, the client tries the other IPs resolved by DNS as the destination IP to request service. At this time, because only the current master IP is reachable, the client's requests switch to the site where that IP is located.
After the original master site is recovered, the user-configured policy may specify not to switch back; in that case the master determined by the state negotiation is not changed and no switchover occurs.
Example 2
FIG. 4 shows a schematic diagram of an active-active data center scenario, according to one embodiment of the present invention.
As shown in fig. 4, this embodiment is an active-active data center scenario: two data centers (sites) work simultaneously, are both in the active state, share the load and back each other up; when one of them fails, the other takes over the service, i.e., serves all clients.
This scenario is a simplification of embodiment 1: compared with embodiment 1, each backup group has only two members (data centers), and the DNS resolution response from the GSLB contains only two addresses. If both data centers fail at the same time (for example, in a natural disaster), service is interrupted because there is no remote backup. The rest is the same as in embodiment 1.
Example 3
Compared with embodiment 2, this embodiment is an active/standby data center scenario: only one of the two data centers (sites) works (is in the active state), and when the active data center (also called the master) fails, the other data center takes over the service.
Compared with embodiment 2, there is only one backup group, i.e., all client requests (the load) are handled by one site while the other site is on standby. When the master site fails, the state negotiation of the backup group automatically switches another site into the master state, so that all services switch to the new master site. The rest is the same as in embodiment 2.
It will be appreciated by persons skilled in the art that the above description of embodiments of the invention is intended only to illustrate the benefits of embodiments of the invention and is not intended to limit embodiments of the invention to any examples given.
Having described embodiments of the present invention, the foregoing description is intended to be exemplary, not exhaustive, and not limited to the embodiments disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments.
Claims (10)
1. A load balancing method based on DNS and route distribution control is characterized by comprising the following steps:
establishing at least one backup group, and configuring the priority of the site in each backup group, wherein the backup group comprises a main site and at least one backup site;
each backup group issues the IP address of the corresponding master site;
the global service load sharing device responds to the request of the client and determines the backup groups for providing service for the client according to the load condition of each backup group;
and the client accesses the master site of the backup group through the IP address, and the master site determines a server for providing service for the client according to the load condition of the server.
2. The load balancing method based on DNS and routing distribution control according to claim 1, wherein if a master site in the backup group fails, a site with the highest priority among the backup sites that have not failed is determined as a new master site.
3. The load balancing method based on DNS and routing distribution control according to claim 1, wherein each backup group determines one master site and a plurality of backup sites within the backup group according to priority, configuration policy, site availability, and algorithm.
4. The load balancing method based on the DNS and route distribution control according to claim 1, wherein if the site includes a plurality of servers, an SLB is set in the site, and the SLB is connected to the plurality of servers and is configured to determine a server that provides a service for the client according to a load condition of the server.
5. The load balancing method based on DNS and route distribution control as claimed in claim 4, wherein detecting site failure by VRRP comprises:
establishing a VPLS, wherein the SLB of each site selects an interface to connect to the VPLS, and an IP address is configured on the SLB side of the interface so that the interfaces configured by the SLBs of all sites are in the same network segment;
the detection protocol detects whether any site has failed.
6. The load balancing method based on DNS and route distribution control according to claim 1, further comprising:
and if the original master site recovers from its failure, changing the master site from the new master site back to the original master site.
7. The load balancing method based on DNS and route distribution control according to claim 1, wherein when the number of backup groups is more than one, different backup groups can contain the same site.
8. The load balancing method based on DNS and route distribution control according to claim 7, wherein the master site of one backup group can be used as the backup site of the other backup group.
9. The method for load balancing based on DNS and route distribution control according to claim 8, wherein the IP addresses of the same site in different backup groups are different.
10. The load balancing method based on DNS and route distribution control according to any one of claims 1 to 9, wherein the backup sites include a local backup site and a remote backup site, and the remote backup site has a lower priority than the local backup site.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211201467.3A CN115632987A (en) | 2022-09-29 | 2022-09-29 | Load balancing method based on DNS and route issuing control |
Publications (1)
Publication Number | Publication Date |
---|---|
CN115632987A true CN115632987A (en) | 2023-01-20 |
Family
ID=84904141
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202211201467.3A Pending CN115632987A (en) | 2022-09-29 | 2022-09-29 | Load balancing method based on DNS and route issuing control |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115632987A (en) |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20080320003A1 (en) * | 2007-06-25 | 2008-12-25 | Microsoft Corporation | Scaling network services using dns |
CN103458013A (en) * | 2013-08-21 | 2013-12-18 | 成都云鹰科技有限公司 | Streaming media server cluster load balancing system and balancing method |
CN106713499A (en) * | 2017-01-23 | 2017-05-24 | 天地融科技股份有限公司 | Load balancing method, equipment and system |
US20200204624A1 (en) * | 2017-07-28 | 2020-06-25 | Beijing Jingdong Shangke Information Technology Co., Ltd. | Data processing system, method and apparatus |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111464648B (en) | Distributed local DNS system and domain name query method | |
US7430611B2 (en) | System having a single IP address associated with communication protocol stacks in a cluster of processing systems | |
US6996617B1 (en) | Methods, systems and computer program products for non-disruptively transferring a virtual internet protocol address between communication protocol stacks | |
US6954784B2 (en) | Systems, method and computer program products for cluster workload distribution without preconfigured port identification by utilizing a port of multiple ports associated with a single IP address | |
US7702791B2 (en) | Hardware load-balancing apparatus for session replication | |
US6941384B1 (en) | Methods, systems and computer program products for failure recovery for routed virtual internet protocol addresses | |
US6470389B1 (en) | Hosting a network service on a cluster of servers using a single-address image | |
US7409420B2 (en) | Method and apparatus for session replication and failover | |
US7609619B2 (en) | Active-active data center using RHI, BGP, and IGP anycast for disaster recovery and load distribution | |
US9379968B2 (en) | Redundancy support for network address translation (NAT) | |
EP1653711B1 (en) | Fault tolerant network architecture | |
CN108234191A (en) | The management method and device of cloud computing platform | |
EP1415236B1 (en) | Method and apparatus for session replication and failover | |
US20030018927A1 (en) | High-availability cluster virtual server system | |
KR20120019462A (en) | Load balancing across layer-2 domains | |
CN109743197B (en) | Firewall deployment system and method based on priority configuration | |
CN112217843B (en) | Service unit switching method, system and equipment | |
AU2002329602A1 (en) | Method and apparatus for session replication and failover | |
US11349706B2 (en) | Two-channel-based high-availability | |
US20080276002A1 (en) | Traffic routing based on client intelligence | |
US20060013227A1 (en) | Method and appliance for distributing data packets sent by a computer to a cluster system | |
CN115632987A (en) | Load balancing method based on DNS and route issuing control | |
JP2003234752A (en) | Load distribution method using tag conversion, tag converter and load distribution controller | |
US10432452B2 (en) | System and method for enabling application-to-application communication in an enterprise computer system | |
US9118581B2 (en) | Routing network traffic |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |