
WO2017162184A1 - Method, device and system for controlling service traffic between data centers - Google Patents


Info

Publication number
WO2017162184A1
Authority
WO
WIPO (PCT)
Prior art keywords
data center
load balancing
balancing device
standby
service
Prior art date
Application number
PCT/CN2017/077807
Other languages
English (en)
French (fr)
Inventor
陈子昂
吴佳明
吴昊
陈卓
王倩
雷海生
董广涛
刘旺旺
李鹏飞
Original Assignee
阿里巴巴集团控股有限公司 (Alibaba Group Holding Limited)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 阿里巴巴集团控股有限公司 (Alibaba Group Holding Limited)
Priority to EP17769461.9A (published as EP3435627A4)
Publication of WO2017162184A1
Priority to US16/141,844 (published as US20190028538A1)

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 1/00: Arrangements for detecting or preventing errors in the information received
    • H04L 1/22: Arrangements for detecting or preventing errors in the information received using redundant apparatus to increase reliability
    • H04L 41/00: Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L 41/06: Management of faults, events, alarms or notifications
    • H04L 41/0654: Management of faults, events, alarms or notifications using network fault recovery
    • H04L 41/0668: Management of faults, events, alarms or notifications using network fault recovery by dynamic selection of recovery network elements, e.g. replacement by the most appropriate element after failure
    • H04L 65/00: Network arrangements, protocols or services for supporting real-time applications in data packet communication
    • H04L 65/40: Support for services or applications
    • H04L 67/00: Network arrangements or protocols for supporting network services or applications
    • H04L 67/01: Protocols
    • H04L 67/10: Protocols in which an application is distributed across nodes in the network
    • H04L 67/1001: Protocols in which an application is distributed across nodes in the network for accessing one among a plurality of replicated servers
    • H04L 67/1004: Server selection for load balancing
    • H04L 67/1008: Server selection for load balancing based on parameters of servers, e.g. available memory or workload
    • H04L 67/1034: Reaction to server failures by a load balancer

Definitions

  • the present invention relates to the field of load balancing technologies, and in particular to a method, device and system for controlling service traffic between data centers.
  • the Internet data center (IDC) is part of the basic resources of the Internet network. It provides high-end data transmission services and high-speed access services: it not only provides a fast and secure network, but also provides network management services such as server supervision and traffic monitoring.
  • the embodiment of the invention provides a method, a device and a system for controlling service traffic between data centers, so as to at least solve the prior-art technical problem that the Internet services in an Internet data center are interrupted when the data center is faulty or unavailable.
  • a method for controlling service traffic between data centers, including: a primary data center and a standby data center having a mutual standby relationship, with at least one load balancing device deployed in each of the primary data center and the standby data center, wherein, when the primary data center is switched to the standby data center, the service traffic transmitted to the primary data center is directed to the standby data center, and the load balancing device of the standby data center allocates the service traffic.
  • a service traffic control system between data centers, including: a primary data center, on which at least one load balancing device configured to receive and forward service traffic is deployed; and a standby data center, which has a mutual standby relationship with the primary data center and on which at least one load balancing device is deployed, wherein, when the primary data center is switched to the standby data center, the service traffic is directed to the standby data center and allocated by the load balancing device of the standby data center.
  • a service traffic control device between data centers, including: a control module configured to, if the primary data center is switched to a standby data center, direct the service traffic transmitted to the primary data center to the standby data center, with the load balancing device of the standby data center allocating the service traffic, where the primary data center and the standby data center have a mutual standby relationship and each of them deploys at least one load balancing device.
  • the primary data center and the standby data center have a mutual standby relationship, and at least one load balancing device is deployed in the primary data center and the standby data center respectively.
  • with this solution, the service traffic transmitted to the primary data center can be forwarded to the standby data center, and the load balancing device of the standby data center allocates the service traffic, thereby implementing service traffic migration.
  • the primary data center and the standby data center have a mutual standby relationship.
  • the data in the primary data center can be synchronized to the standby data center in real time.
  • when the primary data center fails or becomes unavailable, the primary data center can be switched to the standby data center.
  • the load balancing device in the standby data center then performs traffic distribution. Therefore, with the solution provided by the embodiment of the present application, once a data center (for example, the primary data center) suffers a catastrophic failure, the service traffic can be quickly migrated to another data center (for example, the standby data center), which restores the service functions in a short period of time, thereby reducing user waiting time, enhancing network data processing capability, and improving network flexibility and availability.
  • the solution provided by the present invention solves the technical problem of the interruption of Internet service in the Internet data center in the prior art when the data center is faulty or unavailable.
  • FIG. 1 is a block diagram showing the hardware structure of a computer terminal for controlling a service flow between data centers according to Embodiment 1 of the present application;
  • FIG. 2 is a flowchart of a method for controlling traffic flow between data centers according to Embodiment 1 of the present application;
  • FIG. 3 is a schematic diagram of service traffic guidance between data centers according to Embodiment 1 of the present application.
  • FIG. 4 is a schematic diagram of a four-layer load balancing deployment manner according to Embodiment 1 of the present application;
  • FIG. 5 is a schematic diagram of a seven-layer load balancing deployment manner according to Embodiment 1 of the present application.
  • FIG. 6 is an interaction diagram of an optional service flow control method between data centers according to Embodiment 1 of the present application.
  • FIG. 7 is a schematic diagram of a service flow control device between data centers according to Embodiment 2 of the present application.
  • FIG. 8 is a schematic diagram of an optional service flow control device between data centers according to Embodiment 2 of the present application.
  • FIG. 9 is a schematic diagram of an optional service flow control device between data centers according to Embodiment 2 of the present application.
  • FIG. 10 is a schematic diagram of an optional service flow control device between data centers according to Embodiment 2 of the present application.
  • FIG. 11 is a schematic diagram of an optional service flow control device between data centers according to Embodiment 2 of the present application.
  • FIG. 12 is a schematic diagram of a service flow control system between data centers according to Embodiment 3 of the present application.
  • FIG. 13 is a schematic diagram of an optional service flow control system between data centers according to Embodiment 3 of the present application.
  • FIG. 14 is a schematic diagram of an optional service flow control system between data centers according to Embodiment 3 of the present application.
  • FIG. 15 is a schematic diagram of an optional service flow control system between data centers according to Embodiment 3 of the present application.
  • FIG. 16 is a schematic diagram of an optional service flow control system between data centers according to Embodiment 3 of the present application.
  • FIG. 17 is a structural block diagram of a computer terminal according to Embodiment 4 of the present application.
  • IDC: abbreviation of Internet Data Center. An IDC uses the existing Internet communication links and bandwidth resources of the telecommunications department to establish a standardized, telecommunications professional-grade computer room environment, and provides full services such as server hosting, leasing, and related value-added services for enterprises and governments.
  • SLB: short for Server Load Balance. By setting a virtual service address (IP), SLB virtualizes multiple Elastic Compute Service (ECS) resources in the same region into a high-performance, high-availability application service pool.
  • IP: virtual service address.
  • ECS: Elastic Compute Service.
  • BGP: short for Border Gateway Protocol, used to exchange routing information between different autonomous systems (AS).
  • AS: autonomous system.
  • Service migration refers to the migration of a service from one physical DC to another physical DC. During the migration process, all resources of the entire service are migrated together.
  • URL: short for Uniform Resource Locator, a concise representation of the location of, and access method for, a resource available on the Internet; it is the address of a standard resource on the Internet.
  • LVS: open-source four-layer load balancing software implemented on the Linux platform.
  • OSPF runs between the LVS and the uplink switch; the uplink switch distributes the data to the LVS cluster through ECMP equal-cost routing, and the LVS cluster then forwards the data to the service servers.
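  • As a rough illustration of the forwarding chain above, the sketch below (Python, not part of the patent) hashes a flow's 5-tuple onto a set of equal-cost LVS next hops, the way an uplink switch's ECMP hashing keeps all packets of one flow on the same LVS node; the node names are hypothetical.

```python
import hashlib

LVS_CLUSTER = ["lvs-01", "lvs-02", "lvs-03", "lvs-04"]  # hypothetical LVS node names

def ecmp_next_hop(src_ip, src_port, dst_ip, dst_port, proto="tcp"):
    """Hash the flow's 5-tuple and map it onto the equal-cost LVS next hops."""
    key = f"{src_ip}:{src_port}->{dst_ip}:{dst_port}/{proto}".encode()
    digest = hashlib.md5(key).digest()
    return LVS_CLUSTER[int.from_bytes(digest[:4], "big") % len(LVS_CLUSTER)]

# The same flow always lands on the same LVS node; distinct flows spread out.
print(ecmp_next_hop("203.0.113.7", 51234, "198.51.100.10", 80))
```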
  • an embodiment of a method for controlling service traffic between data centers is provided. It should be noted that the steps shown in the flowchart of the accompanying drawings may be executed in a computer system such as a set of computer-executable instructions, and, although a logical order is shown in the flowchart, in some cases the steps shown or described may be performed in an order different from the one herein.
  • FIG. 1 is a hardware structural block diagram of a computer terminal for controlling a service flow between data centers according to Embodiment 1 of the present application.
  • computer terminal 10 may include one or more processors 102 (only one is shown; the processor 102 may include, but is not limited to, a processing device such as a microprocessor (MCU) or a programmable logic device (FPGA)),
  • a memory 104 for storing data
  • a transmission module 106 for communication functions.
  • computer terminal 10 may also include more or fewer components than those shown in FIG. 1, or have a different configuration from that shown in FIG. 1.
  • the memory 104 can be used to store software programs and modules of application software, such as program instructions/modules corresponding to the service flow control method between data centers in the embodiment of the present application, and the processor 102 runs the software programs and modules stored in the memory 104. Thus, various functional applications and data processing are performed, that is, the above-described service flow control method between data centers is implemented.
  • Memory 104 may include high speed random access memory, and may also include non-volatile memory such as one or more magnetic storage devices, flash memory, or other non-volatile solid state memory.
  • memory 104 may further include memory remotely located relative to processor 102, which may be coupled to computer terminal 10 via a network. Examples of such networks include, but are not limited to, the Internet, intranets, local area networks, mobile communication networks, and combinations thereof.
  • Transmission device 106 is for receiving or transmitting data via a network.
  • specific examples of the above network may include a wireless network provided by a communication provider of the computer terminal 10.
  • the transmission device 106 includes a Network Interface Controller (NIC) that can be connected to other network devices through a base station to communicate with the Internet.
  • the transmission device 106 can be a Radio Frequency (RF) module for communicating with the Internet wirelessly.
  • NIC Network Interface Controller
  • RF Radio Frequency
  • FIG. 2 is a flowchart of a method for controlling traffic flow between data centers according to Embodiment 1 of the present application.
  • the method shown in FIG. 2 may include the following steps:
  • Step S22: for a primary data center and a standby data center having a mutual standby relationship, where at least one load balancing device is deployed in each of the primary data center and the standby data center, when the primary data center is switched to the standby data center, the service traffic transmitted to the primary data center is directed to the standby data center, and the load balancing device of the standby data center allocates the service traffic.
  • the primary data center and the standby data center in the foregoing step may be two data centers (IDC rooms) in the same region; the data center with high priority in the data center cluster may be set as the primary data center, and the data center with low priority as the standby data center.
  • the data in the primary data center can be migrated to the backup data center.
  • the storage device in the primary data center communicates with the storage device in the standby data center, and the data in the storage device of the primary data center is synchronized in real time to the storage device of the standby data center.
  • the standby data center creates a corresponding service network and service servers according to the network information of the service servers, network device configuration information, and service server information, and the service traffic transmitted to the primary data center is directed to the standby data center. Specifically, the load balancing device of the primary data center can perform address and port translation on the service traffic sent by the user and send that traffic to the load balancing device in the standby data center, which then forwards the service traffic to the target server according to the load balancing algorithm.
  • FIG. 3 is a schematic diagram of service flow guidance between data centers according to the first embodiment of the present application.
  • the Internet services in the same-region IDCs can advertise the IP addresses of the Internet services in both computer rooms at the same time (BGP routing), as shown in Figure 3.
  • the BGP route of the SLB router of site A is declared as: X.Y.Z.0/24;
  • the BGP route of the SLB router of site B is declared as: X.Y.Z.0/25 and X.Y.Z.128/25;
  • the data center with high priority is the primary data center (which may be the SLB router of site A in FIG. 3), and the data center with low priority is the standby data center (which may be the SLB router of site B in FIG. 3); the primary data center and the standby data center implement the mutual standby relationship. Under normal circumstances, each half of the VIPs runs at high priority under a different IDC.
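  • The failover behavior implied by these announcements can be seen with a longest-prefix-match sketch (Python; illustrative prefixes only, assuming standard BGP route selection): while the more specific /25 routes are announced they attract the traffic, and when they are withdrawn the covering /24 takes over.

```python
import ipaddress

# Illustrative stand-ins for the X.Y.Z.* prefixes in the text above.
routes = {
    "site-A": [ipaddress.ip_network("198.51.100.0/24")],
    "site-B": [ipaddress.ip_network("198.51.100.0/25"),
               ipaddress.ip_network("198.51.100.128/25")],
}

def best_site(dst, table):
    """Pick the site announcing the longest prefix that matches dst."""
    addr = ipaddress.ip_address(dst)
    best, best_len = None, -1
    for site, prefixes in table.items():
        for p in prefixes:
            if addr in p and p.prefixlen > best_len:
                best, best_len = site, p.prefixlen
    return best

print(best_site("198.51.100.42", routes))  # site-B: the /25 is more specific
del routes["site-B"]                       # site B fails, its routes are withdrawn
print(best_site("198.51.100.42", routes))  # site-A: the covering /24 takes over
```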
  • the load balancing device of the standby data center allocates the received service traffic, and distributes the service traffic to the corresponding service server through the load balancing algorithm.
  • the primary data center and the standby data center have a mutual standby relationship, and at least one load balancing device is deployed in each of the primary data center and the standby data center; when the primary data center is switched to the standby data center, the service traffic transmitted to the primary data center is forwarded to the standby data center, and the load balancing device of the standby data center allocates the service traffic to implement service traffic migration.
  • the primary data center and the standby data center have a mutual standby relationship.
  • the data in the primary data center can be synchronized to the standby data center in real time.
  • the primary data center fails or becomes unavailable, the primary data center can be switched to the standby data center.
  • the load balancing device in the standby data center performs traffic distribution. Therefore, with the solution provided by the embodiment of the present application, once a data center (for example, the primary data center) suffers a catastrophic failure, the service traffic can be quickly migrated to another data center (for example, the standby data center), which resumes the service in a short time; this reduces the user's waiting time, enhances the network data processing capability, and improves the flexibility and usability of the network.
  • Embodiment 1 solves the technical problem of the interruption of the Internet service in the Internet data center in the prior art when the data center is faulty or unavailable.
  • the foregoing method may further include the following step: Step S24: monitoring the primary data center by using an intermediate router, and, if the primary data center is in an unavailable state, switching the primary data center to the standby data center.
  • the unavailable state includes at least any one of the following states: a power-off state, a fault state, an intrusion state, and an overflow state.
  • a data center switching instruction may be issued; the storage device of the primary data center lowers its own priority after receiving the data center switching instruction, while the storage device of the standby data center raises its own priority after receiving the instruction, thereby switching the primary data center to the standby data center.
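  • A minimal sketch of this priority adjustment, under assumed semantics (the class and the numeric priorities are invented for illustration): the switching instruction demotes the primary, promotes the standby, and the highest-priority center then serves traffic.

```python
class DataCenter:
    def __init__(self, name, priority):
        self.name, self.priority = name, priority

primary = DataCenter("dc-primary", priority=100)
standby = DataCenter("dc-standby", priority=50)

def apply_switch_instruction(primary, standby):
    """Demote the primary's priority and raise the standby's."""
    primary.priority, standby.priority = 10, 100

def active(*centers):
    """The data center with the highest priority serves the traffic."""
    return max(centers, key=lambda dc: dc.priority).name

print(active(primary, standby))       # dc-primary
apply_switch_instruction(primary, standby)
print(active(primary, standby))       # dc-standby
```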
  • the above embodiments of the present application are described in detail by taking the application scenario shown in FIG. 3 as an example.
  • the "high priority" data center (which can be the SLB router of Site A in Figure 3) provides services to customers. Once the data center is unavailable, the border routing protocol BGP will be very fast (most In the case of a poor condition, within 180 seconds, within 30 seconds under normal conditions, convergence, at this time, the "low priority" data center will replace the faulty (high priority) data center and continue to serve the user.
  • through step S24, when the primary data center is unavailable, the primary data center is switched to the standby data center, so that when the primary data center is faulty or unavailable, service is switched to the standby data center, which provides services for the user.
  • the method may further include the following steps: Step S26, the primary data center and the standby data center synchronize data in real time.
  • the primary data center and the standby data center have a mutual standby relationship, and the data of the primary data center can be backed up to the standby data center in real time, so that when the primary data center (or the standby data center) fails, the standby data center (or the primary data center) can take over the application in a short time, thus ensuring the continuity of the application.
  • the load balancing device in the standby data center can allocate the service traffic transmitted to the primary data center after the primary data center is switched to the standby data center. Therefore, data synchronization between the primary data center and the standby data center needs to be ensured: the storage device in the primary data center can communicate with the storage device in the standby data center to synchronize the data of the primary data center and the standby data center in real time, ensuring data synchronization between the two data centers.
  • the primary data center (which may be the SLB router of site A in Figure 3) and the standby data center (which may be the SLB router of site B in Figure 3) can communicate to synchronize the data in the two storage devices in real time, so that before the primary data center is switched to the standby data center, the data in the primary data center has been backed up to the standby data center, ensuring that the data in the standby data center is synchronized with the data in the primary data center.
  • the primary data center and the standby data center can synchronize data in real time, so that after the primary data center is switched to the standby data center, the load balancing device of the standby data center can allocate the service traffic transmitted to the primary data center, ensuring the availability of the user's services.
  • the load balancing device may include any one or more of the following types: a three-layer load balancing device, a four-layer load balancing device, a five-layer load balancing device, a six-layer load balancing device, and a seven-layer load balancing device. device.
  • the three-layer load balancing device is based on the IP address: it can receive requests through a virtual IP address and then allocate the requests to the real IP address. The four-layer load balancing device is based on the IP address and port: it can receive requests through a virtual IP address and port and then assign them to the real server. The seven-layer load balancing device is based on application-layer information such as the URL: it can receive requests through a virtual URL address or host name and then assign them to the real server.
  • a Layer 4 load balancing device can determine the traffic that needs to be load balanced by publishing a Layer 3 IP address (VIP) plus a Layer 4 port number; the traffic that requires load balancing processing is forwarded to a background server, and the identification information of the chosen background server is saved, so that all subsequent traffic of the connection is processed by the same server.
  • VIP Layer 3 IP address
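  • The "save the identification of the forwarded background server" step amounts to a connection table. A minimal sketch (Python; hypothetical addresses, not the patent's implementation): the first packet of a flow picks a back end, and every later packet of the same 5-tuple reuses that entry.

```python
conn_table = {}  # 5-tuple -> chosen back-end server
BACKENDS = ["10.0.0.11", "10.0.0.12", "10.0.0.13"]  # hypothetical back ends

def forward(five_tuple):
    """Return the back end for this flow, creating a sticky entry on first use."""
    if five_tuple not in conn_table:
        conn_table[five_tuple] = BACKENDS[hash(five_tuple) % len(BACKENDS)]
    return conn_table[five_tuple]

flow = ("203.0.113.7", 51234, "198.51.100.10", 80, "tcp")
assert forward(flow) == forward(flow)  # persistence: same server every time
```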
  • the seven-layer load balancing device may, on the basis of four-layer load balancing, additionally use characteristics of the application layer, such as a URL address, the HTTP protocol, or a cookie, to determine the traffic that requires load balancing processing.
  • step S22 the load balancing device of the standby data center allocates the service traffic, which may include the following steps:
  • Step S222 The four-layer load balancing device of the standby data center selects the target server according to the scheduling policy.
  • Step S224 the four-layer load balancing device allocates service traffic to the target server through the LVS cluster.
  • the scheduling policy in the foregoing steps may include a polling manner, a URL scheduling policy, a URL hash scheduling policy, or a consistent hash scheduling policy, but is not limited thereto.
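  • Two of the named policies, sketched in Python for illustration (the server names and virtual-node count are assumptions, not the patent's values): plain polling cycles through the back ends, while a consistent-hash policy keyed on the URL keeps a given URL pinned to one server even as servers come and go.

```python
import bisect
import hashlib
import itertools

BACKENDS = ["srv-1", "srv-2", "srv-3"]

# Polling (round robin): hand out back ends in rotation.
_rr = itertools.cycle(BACKENDS)
def round_robin():
    return next(_rr)

# Consistent hash: place 100 virtual nodes per server on a hash ring.
_ring = sorted((int(hashlib.md5(f"{s}#{v}".encode()).hexdigest(), 16), s)
               for s in BACKENDS for v in range(100))
def consistent_hash(url):
    h = int(hashlib.md5(url.encode()).hexdigest(), 16)
    i = bisect.bisect(_ring, (h,)) % len(_ring)
    return _ring[i][1]

print(round_robin(), round_robin())  # srv-1 srv-2
print(consistent_hash("/video/42"))  # the same URL always maps to the same server
```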
  • the four-layer load balancing device can send the data traffic to the LVS cluster through ECMP equal-cost routing, and the LVS cluster then forwards the traffic to the target server.
  • the four-layer load balancing device is connected to multiple servers. After receiving a request packet sent by a user of the first network, it can translate the addresses (including the source address and the target address) and the port of the request packet to generate a request packet of the second network, determine the target server from the multiple servers by using the scheduling policy, and send the request packet of the second network to the corresponding target server through the LVS cluster.
  • the target server may return the response packet of the second network to the four-layer load balancing device by means of source address mapping; after receiving the response packet of the second network, the four-layer load balancing device translates its addresses to generate the response packet of the first network and returns the response packet of the first network to the user.
  • the request packet of the first network and the response packet of the first network belong to the same network type, and the request packet of the second network and the response packet of the second network belong to the same network type.
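  • A compact sketch of the translation round trip just described (Python; the addresses, ports, and FULLNAT-style scheme are assumptions for illustration): the request is rewritten toward the back end, and the response is rewritten back toward the user.

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Packet:
    src_ip: str
    src_port: int
    dst_ip: str
    dst_port: int
    payload: str

LOCAL_IP, VIP = "192.168.1.5", "198.51.100.10"  # illustrative addresses

def to_backend(pkt, backend_ip, backend_port, local_port):
    """First-network request -> second-network request (address/port translation)."""
    return replace(pkt, src_ip=LOCAL_IP, src_port=local_port,
                   dst_ip=backend_ip, dst_port=backend_port)

def to_client(resp, client_ip, client_port):
    """Second-network response -> first-network response (reverse translation)."""
    return replace(resp, src_ip=VIP, src_port=80,
                   dst_ip=client_ip, dst_port=client_port)

req = Packet("203.0.113.7", 51234, VIP, 80, "GET /")
inner = to_backend(req, "10.0.0.11", 8080, 40001)   # sent via the LVS cluster
resp = Packet("10.0.0.11", 8080, LOCAL_IP, 40001, "200 OK")
print(to_client(resp, req.src_ip, req.src_port))    # returned to the user
```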
  • FIG. 4 is a schematic diagram of a four-layer load balancing deployment manner according to the first embodiment of the present application.
  • the foregoing embodiment of the present application is described in detail by taking an application scenario as shown in FIG. 4 as an example.
  • the virtual machine VM represents the corresponding user instance.
  • the proxy server proxy represents the proxy component of the SLB and can represent a four-layer load balancing device.
  • the SLB in the data center can check service status through health checks; normally, a monitored flow is forwarded through only one data center. When the primary data center (which may be site A in FIG. 4) is switched to the standby data center (which may be site B in FIG. 4), the four-layer load balancing device of the standby data center selects the target server according to the scheduling policy and distributes the service traffic to the target server through the LVS cluster.
  • the load balancing device can determine the target server through the scheduling policy and allocate the service traffic to the target server through the LVS cluster, thereby ensuring the availability of the user's services and improving the stability of the load balancing service.
  • the scheduling policy includes: determining the target server by checking the online state or the resource usage rate of the multiple back-end service servers, where the scheduling policy is configured by the control server of the standby data center. When any back-end service server is visible to all data centers, the LVS cluster generates cross-flow while forwarding service traffic among the multiple back-end service servers.
  • checking the online status of the back-end service servers determines whether there is a faulty server among the service servers, and checking the resource usage rate of the multiple back-end service servers (that is, the number of service requests processed by each service server) determines the optimal target server.
  • the above embodiment of the present application is described in detail by taking an application scenario as shown in FIG. 4 as an example.
  • for an SLB public cloud four-layer user, in the four-layer area the virtual machine VM can represent the corresponding user instance, and all of its instances are visible to all data centers; therefore, the LVS cluster produces traffic crossover when forwarding service traffic.
  • the target server can be determined by checking the online status or the resource usage rate of the multiple back-end service servers, so that the multiple back-end service servers can cooperate well, eliminating or avoiding the bottlenecks of uneven network load distribution, data traffic congestion, and long response times that exist in current networks.
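  • The selection rule reduces to "healthy first, then least loaded". A minimal sketch under that assumption (the field names are invented):

```python
servers = [
    {"name": "srv-1", "online": True,  "usage": 0.72},
    {"name": "srv-2", "online": False, "usage": 0.10},  # failed the health check
    {"name": "srv-3", "online": True,  "usage": 0.35},
]

def pick_target(servers):
    """Skip servers that are offline, then prefer the lowest resource usage."""
    healthy = [s for s in servers if s["online"]]
    if not healthy:
        raise RuntimeError("no back-end service server available")
    return min(healthy, key=lambda s: s["usage"])

print(pick_target(servers)["name"])  # srv-3: online and least loaded
```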
  • in an optional embodiment, in step S22, the allocation of the service traffic by the load balancing device of the standby data center may include the following steps:
  • Step S226: the seven-layer load balancing device of the standby data center selects the target server according to the scheduling policy;
  • Step S228: the seven-layer load balancing device allocates the service traffic to the target server through the LVS cluster.
  • the scheduling policy in the foregoing step may be the same as or different from the scheduling policy of the four-layer load balancing device.
  • the Layer 7 load balancing device can send data traffic to the LVS cluster through ECMP equal-cost routing, and the LVS cluster then forwards it to the target server.
  • the seven-layer load balancing device is connected to multiple servers. After it receives a request sent by a user of the first network, a connection between the proxy server and the client is established first, and the client then sends the packet carrying the real application-layer content; the device determines the target server according to a specific field in the packet (for example, the header of the HTTP packet) together with the scheduling policy.
  • in this case the load balancing device is more similar to a proxy server: the load balancer establishes TCP connections with the front-end client and with the back-end server separately. Therefore, the seven-layer load balancing device has higher requirements and lower processing capacity than the four-layer load balancing device.
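  • A toy version of that Layer-7 step (Python; the routing table and pool names are invented, and a real proxy parses far more carefully): the proxy reads the request far enough to see the HTTP Host header, and only then chooses a back-end pool.

```python
def parse_http_headers(raw: bytes) -> dict:
    """Extract header fields from the head of an HTTP/1.1 request."""
    head = raw.split(b"\r\n\r\n", 1)[0].decode("latin-1").split("\r\n")
    return {k.lower(): v.strip() for k, v in
            (line.split(":", 1) for line in head[1:] if ":" in line)}

ROUTES = {"img.example.com": "image-pool", "api.example.com": "api-pool"}

def choose_pool(raw_request: bytes) -> str:
    headers = parse_http_headers(raw_request)
    return ROUTES.get(headers.get("host", ""), "default-pool")

req = b"GET /logo.png HTTP/1.1\r\nHost: img.example.com\r\n\r\n"
print(choose_pool(req))  # image-pool
```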
  • FIG. 5 is a schematic diagram of a seven-layer load balancing deployment manner according to the first embodiment of the present application.
  • the proxy server proxy represents the SLB proxy component and can represent a seven-layer load balancing device.
  • the SLB in the data center can check service status through health checks; normally, a monitored flow is forwarded through only one data center. When the primary data center (which may be site A in FIG. 5) is switched to the standby data center (which may be site B in FIG. 5), the seven-layer load balancing device of the standby data center selects the target server according to the scheduling policy and distributes the service traffic to the target server through the LVS cluster.
  • the load balancing device can determine the target server by using the scheduling policy and allocate the service traffic to the target server through the LVS cluster, thereby ensuring the availability of the user's services, avoiding application-layer failures, and improving the stability of the load balancing service.
  • the scheduling policy includes: determining the target server by checking the online status or the resource usage rate of the multiple back-end service servers, where the scheduling policy is configured by the control server of the standby data center. When the back-end service group only allows the current data center to access, the at least one back-end service server that has a connection relationship with each LVS in the LVS cluster is different for each LVS, so that no cross-flow is generated when service traffic is forwarded among the multiple back-end service servers.
  • checking the online status of the back-end service servers determines whether there is a faulty server among the service servers, and checking the resource usage rate of the multiple back-end service servers (that is, the number of service requests processed by each service server) determines the optimal target server.
  • the proxy server proxy represents the SLB proxy component; if all of its instances were visible to all data centers, traffic crossover would occur when the LVS cluster forwards traffic. Making the proxy component visible only to the SLB of its own data center avoids seven-layer user traffic crossing in the L4 area and adding unnecessary delay.
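  • The "different back ends per LVS" idea is a disjoint partition. A small sketch (Python; the striping rule is an assumed example, not the patent's assignment algorithm):

```python
def partition_backends(lvs_nodes, backends):
    """Give each LVS node a disjoint, roughly equal slice of the back ends."""
    return {node: backends[i::len(lvs_nodes)]
            for i, node in enumerate(lvs_nodes)}

assignment = partition_backends(["lvs-a", "lvs-b"],
                                ["srv-1", "srv-2", "srv-3", "srv-4"])
print(assignment)  # {'lvs-a': ['srv-1', 'srv-3'], 'lvs-b': ['srv-2', 'srv-4']}

# Disjointness: no back end appears under two LVS nodes, so no cross-flow.
assigned = [s for subset in assignment.values() for s in subset]
assert len(assigned) == len(set(assigned))
```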
  • the target server can be determined by checking the online status or the resource usage rate of the multiple back-end service servers, so that the multiple back-end service servers can cooperate well, eliminating or avoiding the bottlenecks of uneven network load distribution, data traffic congestion, and long response times that exist in current networks.
  • the control server of the data center configures the RDS database corresponding to the current data center; when the RDS database only allows access from the current data center, the service traffic does not generate cross-flow.
  • the above embodiment of the present application is described in detail by taking an application scenario as shown in FIG. 5 as an example.
  • the virtual machine VM represents the RDS database.
  • RDS is sensitive to delay. Therefore, in the configuration, the data center id of the database is specified.
  • the SLB configuration system ensures that it is only visible to the SLB of the data center, avoiding traffic crossover and reducing unnecessary delay.
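  • Sketched as data (Python; the configuration keys are hypothetical): the RDS instance carries the id of its data center, and visibility is granted only to the SLB of that same data center.

```python
rds_config = {"instance": "rds-001", "data_center_id": "dc-a"}  # hypothetical keys

def visible_to(slb_data_center_id, config):
    """An RDS instance is only visible to the SLB of its own data center."""
    return config["data_center_id"] == slb_data_center_id

print(visible_to("dc-a", rds_config))  # True: same data center
print(visible_to("dc-b", rds_config))  # False: cross-data-center access avoided
```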
  • an optional service flow control method between data centers is provided, and the method may include the following steps S61 to S64:
  • step S61 the primary data center 121 and the standby data center 123 synchronize data in real time.
  • the primary data center and the standby data center can have a mutual standby relationship, and the data of the primary data center can be backed up in the standby data center in real time.
  • step S62 the intermediate router 131 monitors the status of the primary data center 121.
  • if the primary data center is in an unavailable state, the primary data center is switched to the standby data center.
  • when the intermediate router detects that the primary data center is in a power-off state, a fault state, an intrusion state, or an overflow state, it determines that the primary data center is in an unavailable state, lowers the priority of the primary data center, and raises the priority of the standby data center, thereby switching the primary data center to the standby data center.
  • step S63 the intermediate router 131 directs the traffic transmitted to the primary data center to the standby data center 123.
  • the load balancing device in the primary data center can perform address and port translation on the service traffic sent by the user and send that traffic to the load balancing device in the standby data center.
  • step S64 the load balancing device of the standby data center 123 allocates service traffic.
  • the load balancing device may be: a three-layer load balancing device, a four-layer load balancing device, a five-layer load balancing device, a six-layer load balancing device, and a seven-layer load balancing device.
  • the load balancing device can select the target server according to the scheduling policy, and allocate the service traffic to the target server through the LVS cluster.
  • the primary data center and the standby data center can synchronize data in real time.
  • the primary data center is switched to the standby data center, and the traffic transmitted to the primary data center is directed to the standby data center.
  • the load balancing device of the standby data center allocates the service traffic, so that when the entire data center is faulty or unavailable, the Internet services in the IDC still have the ability to resume service in a short period of time.
  • the service flow control method between data centers according to the above embodiment can be implemented by means of software plus a necessary general hardware platform, and certainly also by hardware, but in many cases the former is the better implementation.
  • the technical solution of the present application, in essence or in the part contributing to the prior art, may be embodied in the form of a software product stored in a storage medium (such as a ROM/RAM, a magnetic disk, or an optical disc) and including a number of instructions for causing a terminal device (which may be a mobile phone, a computer, a server, a network device, or the like) to perform the methods described in the various embodiments of the present application.
  • a service flow control device between data centers for implementing the above service flow control method is also provided.
  • the service flow control device between data centers includes: a control module 71.
  • the control module 71 is configured to: when the primary data center is switched to the standby data center, direct the service traffic transmitted to the primary data center to the standby data center, and the load balancing device of the standby data center allocates the service traffic.
  • the primary data center and the standby data center have a mutual standby relationship. At least one load balancing device is deployed in the primary data center and the standby data center.
  • the primary data center and the standby data center in the foregoing description may be two data centers (IDC rooms) in the same region; the data center with high priority in the data center cluster may be set as the primary data center, and the data center with low priority as the standby data center.
  • the data in the primary data center can be migrated to the backup data center.
  • the storage device in the primary data center communicates with the storage device in the standby data center, and the data in the storage device of the primary data center is synchronized in real time to the storage device of the standby data center.
  • the standby data center creates a corresponding service network and service servers according to the network information of the service servers, network device configuration information, and service server information, and the service traffic transmitted to the primary data center is directed to the standby data center. Specifically, the load balancing device of the primary data center can perform address and port translation on the service traffic sent by the user and send that traffic to the load balancing device in the standby data center, which then forwards the service traffic to the target server according to the load balancing algorithm.
  • the control module 71 corresponds to step S22 in Embodiment 1; the examples and application scenarios implemented by the module and its corresponding step are the same, but are not limited to the content disclosed in the above embodiment. It should be noted that the above module can run, as a part of the device, in the computer terminal 10 provided in Embodiment 1.
  • the primary data center and the standby data center have a mutual standby relationship, and at least one load balancing device is deployed in each of the primary data center and the standby data center; when the primary data center is switched to the standby data center, the service traffic transmitted to the primary data center is forwarded to the standby data center, and the load balancing device of the standby data center allocates the service traffic to implement service traffic migration.
  • the primary data center and the standby data center have a mutual standby relationship.
  • the data in the primary data center can be synchronized to the standby data center in real time.
  • the primary data center fails or becomes unavailable, the primary data center can be switched to the standby data center.
  • the load balancing device in the standby data center performs traffic distribution. Therefore, with the solution provided by the embodiment of the present application, once a data center (for example, the primary data center) suffers a catastrophic failure, the service traffic can be quickly migrated to another data center (for example, the standby data center), which restores the service functions in a short period of time, thereby reducing user waiting time, enhancing network data processing capability, and improving network flexibility and availability.
  • Embodiment 2 solves the technical problem of the interruption of the Internet service in the Internet data center in the prior art when the data center is faulty or unavailable.
  • the foregoing apparatus may further include: a switching module 81.
  • the switching module 81 is configured to monitor the primary data center, and if the primary data center is in an unavailable state, the primary data center is switched to the standby data center.
  • the unavailable state includes at least any one of the following states: a power-off state, a fault state, an intrusion state, and an overflow state.
  • the above switching module 81 corresponds to step S24 in Embodiment 1; the examples and application scenarios implemented by the module and its corresponding step are the same, but are not limited to the content disclosed in the above embodiment. It should be noted that the above module can run, as a part of the device, in the computer terminal 10 provided in Embodiment 1.
  • when the primary data center is unavailable, the primary data center is switched to the standby data center, so that when the primary data center is faulty or unavailable, service is switched to the standby data center, which provides services for the user.
  • the foregoing apparatus may further include: a setting module 91 and a synchronization module 93.
  • the setting module 91 is configured to set the data center with high priority in the data center cluster as the primary data center, and set the data center with lower priority as the standby data center; the synchronization module 93 is configured to synchronize data between the primary data center and the standby data center in real time.
  • the primary data center and the standby data center have a mutual standby relationship, and the data of the primary data center can be backed up to the standby data center in real time, so that when the primary data center (or the standby data center) fails, the standby data center (or the primary data center) can take over the application in a short time, thus ensuring the continuity of the application.
  • the foregoing synchronization module 93 corresponds to step S26 in Embodiment 1; the examples and application scenarios implemented by the module and its corresponding step are the same, but are not limited to the content disclosed in the above embodiment. It should be noted that the above module can run, as a part of the device, in the computer terminal 10 provided in Embodiment 1.
  • the primary data center and the standby data center can synchronize data in real time, so that after the primary data center is switched to the standby data center, the load balancing device of the standby data center can allocate the service traffic transmitted to the primary data center, ensuring the availability of the user's services.
  • the load balancing device includes any one or more of the following types: a three-layer load balancing device, a four-layer load balancing device, a five-layer load balancing device, a six-layer load balancing device, and a seven-layer load balancing device. .
  • the three-layer load balancing device is based on the IP address: it can receive requests through a virtual IP address and then allocate the requests to the real IP address. The four-layer load balancing device is based on the IP address and port: it can receive requests through a virtual IP address and port and then assign them to the real server. The seven-layer load balancing device is based on application-layer information such as the URL: it can receive requests through a virtual URL address or host name and then assign them to the real server.
  • the control module 71 may further include: a first selection sub-module 101 and a first distribution sub-module 103.
  • the first selection sub-module 101 is configured to select a target server according to a scheduling policy; the first distribution sub-module 103 is configured to allocate service traffic to the target server through the LVS cluster.
  • the scheduling policy in the foregoing steps may include a polling manner, a URL scheduling policy, a URL hash scheduling policy, or a consistent hash scheduling policy, but is not limited thereto.
  • the Layer 4 load balancing device can send data traffic to the LVS cluster through ECMP equal-cost routing, and the LVS cluster then forwards it to the target server.
  • the first selection sub-module 101 and the first distribution sub-module 103 correspond to steps S222 to S224 in Embodiment 1; the examples and application scenarios implemented by the two modules and their corresponding steps are the same, but are not limited to the content disclosed in the above embodiment. It should be noted that the above modules can run, as a part of the device, in the computer terminal 10 provided in Embodiment 1.
  • the load balancing device can determine the target server through the scheduling policy and allocate the service traffic to the target server through the LVS cluster, thereby ensuring the availability of the user's services and improving the stability of the load balancing service.
  • the scheduling policy includes: determining the target server by checking the online state or the resource usage rate of the multiple back-end service servers, where the scheduling policy is configured by the control server of the standby data center. When any back-end service server is visible to all data centers, the LVS cluster generates cross-flow while forwarding service traffic among the multiple back-end service servers.
  • the target server can be determined by checking the online status or the resource usage rate of the multiple back-end service servers, so that the multiple back-end service servers can cooperate well, eliminating or avoiding the bottlenecks of uneven network load distribution, data traffic congestion, and long response times that exist in current networks.
  • the control module 71 may further include: a second selection sub-module 111 and a second distribution sub-module 113.
  • the second selection sub-module 111 is configured to select a target server according to a scheduling policy; the second distribution sub-module 113 is configured to allocate service traffic to the target server through the LVS cluster.
  • the scheduling policy in the foregoing step may be the same as or different from the scheduling policy of the four-layer load balancing device.
  • the Layer 7 load balancing device can send data traffic to the LVS cluster through ECMP equal-cost routing, and the LVS cluster then forwards it to the target server.
  • in this case the load balancing device is more similar to a proxy server: the load balancer establishes TCP connections with the front-end client and with the back-end server separately. Therefore, the seven-layer load balancing device has higher requirements and lower processing capacity than the four-layer load balancing device.
  • the foregoing second selection sub-module 111 and the second distribution sub-module 113 correspond to steps S226 to S228 in Embodiment 1; the examples and application scenarios implemented by the two modules and their corresponding steps are the same, but are not limited to the content disclosed in the above embodiment. It should be noted that the above modules can run, as a part of the device, in the computer terminal 10 provided in Embodiment 1.
  • the load balancing device can determine the target server through the scheduling policy and allocate the service traffic to the target server through the LVS cluster, thereby ensuring the availability of the user's services, avoiding application-layer failures, and improving the stability of the load balancing service.
  • the scheduling policy includes: determining the target server by checking the online state or the resource usage rate of the multiple back-end service servers, where the scheduling policy is configured by the control server of the standby data center. If the back-end service group only allows the current data center to access, the at least one back-end service server that has a connection relationship with each LVS in the LVS cluster is different for each LVS, so that no cross-flow is generated when the service traffic is forwarded among the multiple back-end service servers.
  • the target server can be determined by checking the online status or the resource usage rate of the multiple back-end service servers, so that the multiple back-end service servers can cooperate well, eliminating or avoiding the bottlenecks of uneven network load distribution, data traffic congestion, and long response times that exist in current networks.
  • the control server of the data center configures the RDS database corresponding to the current data center; when the RDS database only allows access from the current data center, the service traffic does not generate cross-flow.
  • a service flow control system between data centers is also provided.
  • the system may include: a primary data center 121 and a standby data center 123.
  • the primary data center 121 deploys at least one load balancing device configured to receive and forward service traffic; the standby data center 123 has a mutual standby relationship with the primary data center 121 and also deploys at least one load balancing device, wherein, when the primary data center is switched to the standby data center, the service traffic is directed to the standby data center and allocated by the load balancing device of the standby data center.
  • the primary data center and the standby data center in the foregoing description may be two data centers (IDC rooms) in the same region; the data center with high priority in the data center cluster may be set as the primary data center, and the data center with low priority as the standby data center.
  • the data of the primary data center is migrated to the backup data center, and the storage device of the primary data center communicates with the storage device of the standby data center to synchronize the data in the storage device of the primary data center to the storage device of the standby data center.
  • the standby data center creates a corresponding service network and service servers according to the network information of the service servers, the network device configuration information, and the service server information, and the service traffic transmitted to the primary data center is directed to the standby data center. Specifically, the load balancing device of the primary data center can perform address and port translation on the service traffic sent by the user and send that traffic to the load balancing device in the standby data center.
  • the load balancing device can forward the service traffic to the target server according to the load balancing algorithm.
  • the Internet services in the same-region IDCs can advertise the IP addresses of the Internet services in both computer rooms at the same time (BGP routing), as shown in Figure 3.
  • the BGP route of the SLB router of site A is declared as: X.Y.Z.0/24;
  • the BGP route of the SLB router of site B is declared as: X.Y.Z.0/25 and X.Y.Z.128/25;
  • the data center with high priority is the primary data center (which may be the SLB router of site A in FIG. 3), and the data center with low priority is the standby data center (which may be the SLB router of site B in FIG. 3); the primary data center and the standby data center implement the mutual standby relationship. Under normal circumstances, each half of the VIPs runs at high priority under a different IDC.
  • the load balancing device of the standby data center allocates the received service traffic, and distributes the service traffic to the corresponding service server through the load balancing algorithm.
  • the primary data center and the standby data center have a mutual standby relationship, and at least one load balancing device is deployed in each of the primary data center and the standby data center; when the primary data center is switched to the standby data center, the service traffic transmitted to the primary data center is forwarded to the standby data center, and the load balancing device of the standby data center allocates the service traffic to implement service traffic migration.
  • the primary data center and the standby data center have a mutual standby relationship.
  • the data in the primary data center can be synchronized to the standby data center in real time.
  • the primary data center fails or becomes unavailable, the primary data center can be switched to the standby data center.
  • the load balancing device in the standby data center performs traffic distribution. Therefore, with the solution provided by the embodiment of the present application, once a data center (for example, the primary data center) suffers a catastrophic failure, the service traffic can be quickly migrated to another data center (for example, the standby data center), which restores the service functions in a short period of time, thereby reducing user waiting time, enhancing network data processing capability, and improving network flexibility and availability.
  • Embodiment 3 solves the technical problem of the interruption of the Internet service in the Internet data center in the prior art when the data center is faulty or unavailable.
  • the above system further includes: an intermediate router 131.
  • the intermediate router 131 is configured to monitor the primary data center. If the primary data center is in an unavailable state, the primary data center is switched to the standby data center.
  • the unavailable state includes at least any one of the following states: a power-off state, a fault state, an intrusion state, and an overflow state.
  • a data center switching instruction may be issued; the storage device of the primary data center lowers its own priority after receiving the data center switching instruction, while the storage device of the standby data center raises its own priority after receiving the instruction, thereby switching the primary data center to the standby data center.
  • the above embodiments of the present application are described in detail by taking the application scenario shown in FIG. 3 as an example.
  • the "high priority" data center (which can be the SLB router of Site A in Figure 3) provides services to customers. Once the data center is unavailable, the border routing protocol BGP will be very fast (most In the case of a poor condition, within 180 seconds, within 30 seconds under normal conditions, convergence, at this time, the "low priority" data center will replace the faulty (high priority) data center and continue to serve the user.
  • when the primary data center is unavailable, the primary data center is switched to the standby data center, so that when the primary data center is faulty or unavailable, service is switched to the standby data center, which provides services for the user.
  • the primary data center 121 is further configured to synchronize data with the standby data center in real time before the primary data center switches to the standby data center.
  • the primary data center and the standby data center have a mutual standby relationship, and the data of the primary data center can be backed up to the standby data center in real time, so that when the primary data center (or the standby data center) fails, the standby data center (or the primary data center) can take over the application in a short time, thus ensuring the continuity of the application.
  • the load balancing device in the standby data center can allocate the service traffic transmitted to the primary data center after the primary data center is switched to the standby data center. Therefore, data synchronization between the primary data center and the standby data center needs to be ensured: the storage device in the primary data center can communicate with the storage device in the standby data center to synchronize the data of the primary data center and the standby data center in real time, ensuring data synchronization between the two data centers.
  • the primary data center (which may be the SLB router of Site A in Figure 3) and the standby data center (which may be the SLB router of Site B in Figure 3) can communicate to synchronize the data in the two storage devices in real time and can When the data center is switched to the standby data center, the data of the primary data center is backed up to the standby data center to ensure the data center. The data is synchronized with the data in the primary data center.
  • the primary data center and the standby data center can synchronize data in real time, so that after the primary data center is switched to the standby data center, the load balancing device of the standby data center can allocate the service traffic transmitted to the primary data center. Ensure the availability of user business services.
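  • As a toy illustration of the real-time synchronization between the two storage devices, the sketch below ships every write on the primary to the standby through an in-process queue; the queue transport and the key/value model are assumptions for the example, since a production system would use the storage layer's own replication protocol.

```python
import queue
import threading

class ReplicatedStore:
    """Primary-side store that mirrors every write to the standby in real time."""
    def __init__(self):
        self.data = {}
        self.replication_log = queue.Queue()  # transport to the standby site

    def put(self, key, value):
        self.data[key] = value
        self.replication_log.put((key, value))  # synchronize immediately

def standby_applier(log, standby_data):
    """Standby-side worker: apply replicated writes so the standby can take
    over the application in a short time when the primary fails."""
    while True:
        key, value = log.get()
        standby_data[key] = value

primary = ReplicatedStore()
standby_data = {}
threading.Thread(target=standby_applier,
                 args=(primary.replication_log, standby_data),
                 daemon=True).start()
primary.put("vip:X.Y.Z.1", "listener-config")  # appears on the standby as well
```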
  • The load balancing device includes any one or more of the following types: a Layer-3 load balancing device, a Layer-4 load balancing device, a Layer-5 load balancing device, a Layer-6 load balancing device, and a Layer-7 load balancing device.
  • The Layer-3 load balancing device works on IP addresses: it receives requests on a virtual IP address and then distributes them to real IP addresses. The Layer-4 load balancing device works on IP address plus port: it receives requests on a virtual IP address and port and then distributes them to real servers. The Layer-7 load balancing device works on application-layer information such as the URL: it receives requests on a virtual URL or host name and then distributes them to real servers.
  • A Layer-4 load balancing device can determine which traffic needs load balancing by advertising a Layer-3 IP address (the VIP) plus a Layer-4 port number; it forwards that traffic to a backend server and records the identifier of the chosen backend server, so that all subsequent traffic of the same connection is handled by the same server.
  • A Layer-7 load balancing device may additionally use application-layer characteristics, such as the URL, the HTTP protocol, or cookies, on top of the Layer-4 information to determine the traffic that requires load balancing. A small classification sketch follows below.
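  • The difference between the Layer-4 and Layer-7 match criteria can be sketched as two small predicates over an incoming request; the dictionary field names used here are assumptions for the illustration.

```python
def needs_l4_balancing(packet, vip, port):
    # Layer 4: match only on the advertised VIP plus the Layer-4 port.
    return packet["dst_ip"] == vip and packet["dst_port"] == port

def needs_l7_balancing(request, url_prefix, cookie_name=None):
    # Layer 7: additionally inspect application-layer features such as the
    # URL, the HTTP protocol, or a cookie.
    if not request["url"].startswith(url_prefix):
        return False
    return cookie_name is None or cookie_name in request.get("cookies", {})

pkt = {"dst_ip": "203.0.113.10", "dst_port": 80}
req = {"url": "/api/orders", "cookies": {"session": "abc"}}
print(needs_l4_balancing(pkt, "203.0.113.10", 80))             # True
print(needs_l7_balancing(req, "/api", cookie_name="session"))  # True
```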
  • The load balancing device includes a four-layer (Layer-4) load balancing device 141.
  • The Layer-4 load balancing device 141 is configured to select a target server according to a scheduling policy and to distribute service traffic to the target server through the LVS cluster.
  • The scheduling policy in the foregoing step may include, but is not limited to, round robin, a URL scheduling policy, a URL hash scheduling policy, or a consistent hash scheduling policy; the consistent hash variant is sketched below.
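  • Of the scheduling policies just listed, consistent hashing is sketched below; it keeps most flow keys mapped to the same real server when servers join or fail. The virtual-node count and hash function are arbitrary choices made for the example.

```python
import bisect
import hashlib

class ConsistentHashScheduler:
    """Consistent hash scheduling policy: place servers on a hash ring so a
    flow key (for example a client address or URL) maps to a stable server."""
    def __init__(self, servers, vnodes=64):
        self.ring = sorted((self._h(f"{s}#{i}"), s)
                           for s in servers for i in range(vnodes))
        self.keys = [h for h, _ in self.ring]

    @staticmethod
    def _h(text):
        return int(hashlib.md5(text.encode()).hexdigest(), 16)

    def pick(self, flow_key):
        # Walk clockwise to the first server at or after the key's position.
        i = bisect.bisect(self.keys, self._h(flow_key)) % len(self.ring)
        return self.ring[i][1]

sched = ConsistentHashScheduler(["rs-1", "rs-2", "rs-3"])
print(sched.pick("203.0.113.7:56012"))  # the same key always lands on one server
```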
  • The Layer-4 load balancing device can send data traffic to the LVS cluster through ECMP equal-cost routing, and the LVS cluster then forwards it to the target server.
  • The Layer-4 load balancing device is connected to multiple servers; after receiving a request packet sent by a user of the first network, it translates the packet's addresses (including the source address and the destination address) and port to generate a request packet of the second network, determines the target server from the multiple servers using the scheduling policy, and has the LVS cluster send the request packet of the second network to the corresponding target server.
  • The target server can return its response packet of the second network to the Layer-4 load balancing device using source address mapping; after receiving the response packet of the second network, the device translates its address and port to generate a response packet of the first network and returns that response packet to the user. A sketch of this translation flow, with a per-connection table for stickiness, follows below.
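  • The request/response translation just described might look like the following sketch, which also keeps the per-connection table that pins a connection to one backend; the packet layout and port numbers are simplifications invented for the example (in LVS this logic runs in the kernel).

```python
class L4Translator:
    """Layer-4 load balancer sketch: rewrite addresses and ports between the
    first network (client side) and the second network (server side)."""
    def __init__(self, scheduler, local_ip="10.0.0.1", vip="203.0.113.10"):
        self.scheduler = scheduler    # e.g. the ConsistentHashScheduler above
        self.local_ip = local_ip
        self.vip = vip
        self.conn_table = {}          # (client_ip, client_port) -> backend

    def forward_request(self, pkt):
        key = (pkt["src_ip"], pkt["src_port"])
        # Remember the chosen backend so later packets stay on one server.
        backend = self.conn_table.setdefault(key, self.scheduler.pick(str(key)))
        return {"src_ip": self.local_ip, "src_port": pkt["src_port"],
                "dst_ip": backend, "dst_port": 80}

    def forward_response(self, client):
        # Reverse translation: the client only ever sees the VIP.
        return {"src_ip": self.vip, "src_port": 80,
                "dst_ip": client[0], "dst_port": client[1]}
```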
  • The request packet of the first network and the response packet of the first network belong to the same network type, and the request packet of the second network and the response packet of the second network likewise belong to the same network type.
  • The above embodiment of the present application is described in detail by taking the application scenario shown in FIG. 4 as an example.
  • In the Layer-4 region, a virtual machine (VM) represents the corresponding user instance.
  • The SLB in each data center can steer service traffic based on health checks; under normal conditions, the traffic of one listener is forwarded through only one data center.
  • When the primary data center (which may be Site A in FIG. 4) is switched to the standby data center (which may be Site B in FIG. 4), the Layer-4 load balancing device of the standby data center selects the target server according to the scheduling policy and distributes the traffic to the target server through the LVS cluster.
  • Through the above scheme, the load balancing device can determine the target server through the scheduling policy and distribute the traffic to it through the LVS cluster, thereby ensuring the availability of user services and improving the stability of the load balancing service.
  • The load balancing device includes a seven-layer (Layer-7) load balancing device 151.
  • The Layer-7 load balancing device 151 is configured to select a target server according to a scheduling policy and to distribute service traffic to the target server through the LVS cluster.
  • The scheduling policy in the foregoing step may be the same as or different from the scheduling policy of the Layer-4 load balancing device.
  • The Layer-7 load balancing device can send data traffic to the LVS cluster through ECMP equal-cost routing, and the LVS cluster then forwards it to the target server.
  • The Layer-7 load balancing device is connected to multiple servers; after receiving a request sent by a user of the first network, it establishes a connection with the client through the proxy server, receives the packet carrying the real application-layer content sent by the client, and then determines the target server according to a specific field in that packet (for example, the header of an HTTP packet) together with the scheduling policy.
  • In this case the load balancing device behaves more like a proxy server: it establishes separate TCP connections with the front-end client and with the back-end server. Consequently, the Layer-7 load balancing device has higher resource requirements and lower processing capacity than the Layer-4 load balancing device. A minimal header-routing sketch follows below.
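  • A Layer-7 decision of this kind, terminating the client connection, reading the request, and then choosing a backend from an HTTP field, can be sketched as below; the routing table and pool names are assumptions introduced for the example.

```python
def pick_backend_l7(raw_request: bytes, routes, default_pool):
    """Choose a backend pool from the Host header and URL path of an HTTP
    request; `routes` maps (host, path_prefix) to a pool name."""
    head = raw_request.split(b"\r\n\r\n", 1)[0].decode("latin-1")
    lines = head.split("\r\n")
    _method, path, _version = lines[0].split(" ", 2)
    headers = dict(line.split(": ", 1) for line in lines[1:] if ": " in line)
    host = headers.get("Host", "")
    for (route_host, prefix), pool in routes.items():
        if host == route_host and path.startswith(prefix):
            return pool
    return default_pool

routes = {("shop.example.com", "/api"): "api-pool",
          ("shop.example.com", "/"): "web-pool"}
req = b"GET /api/cart HTTP/1.1\r\nHost: shop.example.com\r\n\r\n"
print(pick_backend_l7(req, routes, "web-pool"))  # -> api-pool
```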
  • The proxy server (proxy) represents the proxy component of the SLB.
  • The SLB in each data center can steer service traffic based on health checks; under normal conditions, the traffic of one listener is forwarded through only one data center.
  • When the primary data center (which may be Site A in FIG. 5) is switched to the standby data center (which may be Site B in FIG. 5), the Layer-7 load balancing device of the standby data center selects the target server according to the scheduling policy and distributes the traffic to the target server through the LVS cluster.
  • Through the above scheme, the load balancing device can determine the target server through the scheduling policy and distribute the traffic to it through the LVS cluster, thereby ensuring the availability of user services, avoiding application-layer failures, and improving the stability of the load balancing service.
  • The standby data center 123 further includes a control server 161.
  • The control server 161 is connected to the Layer-4 load balancing device and to the Layer-7 load balancing device, respectively, and is used to configure the scheduling policy.
  • For the Layer-4 case, the scheduling policy includes determining the target server by checking the online state or resource usage of the multiple back-end service servers; the control server 161 also covers the case in which any data center is allowed to access every back-end service group, where the LVS cluster generates cross-data-center flows when forwarding traffic among the multiple back-end service servers.
  • Checking the online state of the back-end service servers determines whether any of them has failed, and checking the resource usage of the multiple back-end service servers determines how many service requests each one is handling, so that the optimal target server can be selected; a selection sketch follows below.
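  • The check-then-select step can be sketched as follows: drop servers that fail the health check, then pick the one with the lowest resource usage. The `online` and `cpu` fields stand in for whatever the health check actually reports.

```python
def pick_target(servers):
    """Scheduling-policy sketch: skip failed servers, then choose the
    back-end service server with the lowest resource usage."""
    online = [s for s in servers if s["online"]]
    if not online:
        raise RuntimeError("no healthy back-end service server available")
    return min(online, key=lambda s: s["cpu"])

backends = [
    {"name": "rs-1", "online": True,  "cpu": 0.72},
    {"name": "rs-2", "online": False, "cpu": 0.10},  # failed its health check
    {"name": "rs-3", "online": True,  "cpu": 0.35},
]
print(pick_target(backends)["name"])  # -> rs-3
```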
  • The above embodiment of the present application is described in detail by taking the application scenario shown in FIG. 4 as an example.
  • In the Layer-4 region, a VM can represent the corresponding user instance, and all such instances are visible to all data centers; therefore, traffic crossover occurs when the LVS cluster forwards traffic.
  • By checking the online state or resource usage of the multiple back-end service servers to determine the target server, the back-end service servers can cooperate effectively, eliminating or avoiding the existing-network bottlenecks of uneven load distribution and long response times under congested data traffic.
  • For the Layer-7 case, the scheduling policy likewise includes determining the target server by checking the online state or resource usage of the multiple back-end service servers; the control server 161 also covers the case in which the multiple back-end service groups allow access only from the current standby data center, where the sets of connected back-end service servers assigned to the individual LVS nodes in the LVS cluster are all different, so that no cross-flow is generated when traffic is forwarded among the multiple back-end service servers (see the partitioning sketch after this passage).
  • The proxy server (proxy) represents the SLB proxy component; if all of its instances were visible to all data centers, traffic crossover would occur when the LVS cluster forwards traffic. The proxy component is therefore made visible only to the SLB of its own data center, which prevents Layer-7 user traffic from crossing data centers within the Layer-4 region and adding unnecessary delay.
  • By checking the online state or resource usage of the multiple back-end service servers to determine the target server, the back-end service servers can cooperate effectively, eliminating or avoiding the existing-network bottlenecks of uneven load distribution and long response times under congested data traffic.
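  • The rule that each LVS receives a different set of back-end servers can be sketched as a simple round-robin partition; in practice the assignment would come from the control server's configuration rather than this toy function.

```python
def partition_backends(lvs_nodes, backends):
    """Assign each LVS a disjoint subset of back-end service servers so that
    forwarded traffic never crosses between subsets (no cross-flow)."""
    assignment = {lvs: [] for lvs in lvs_nodes}
    for i, backend in enumerate(backends):
        assignment[lvs_nodes[i % len(lvs_nodes)]].append(backend)
    return assignment

print(partition_backends(["lvs-a", "lvs-b"],
                         ["proxy-1", "proxy-2", "proxy-3", "proxy-4"]))
# {'lvs-a': ['proxy-1', 'proxy-3'], 'lvs-b': ['proxy-2', 'proxy-4']}
```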
  • The control server 161 is further configured to configure the RDS database corresponding to the current data center; when the RDS database allows access only from the current standby data center, no cross-flow is generated when the RDS database handles traffic.
  • The above embodiment of the present application is described in detail by taking the application scenario shown in FIG. 5 as an example.
  • In this scenario, the virtual machine VM represents the RDS database.
  • RDS is latency-sensitive; therefore the configuration specifies the id of the data center where the database resides, and the SLB configuration system ensures that the database is visible only to the SLB of that data center, avoiding traffic crossover and reducing unnecessary delay. A configuration sketch follows below.
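  • The RDS visibility rule, pinning a database to a data center id and exposing it only to that data center's SLB, might be expressed as the following configuration check; all field and site names here are invented for the illustration.

```python
rds_config = {
    "rds-instance-01": {"datacenter_id": "site-b"},  # database pinned to one DC
}

def visible_to_slb(rds_name, slb_datacenter_id, config=rds_config):
    """Only the SLB in the database's own data center may see it, avoiding
    cross-data-center traffic and the extra latency RDS is sensitive to."""
    return config[rds_name]["datacenter_id"] == slb_datacenter_id

assert visible_to_slb("rds-instance-01", "site-b")
assert not visible_to_slb("rds-instance-01", "site-a")
```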
  • Embodiments of the present application may provide a computer terminal, which may be any computer terminal device in a computer terminal group.
  • Optionally, in this embodiment, the computer terminal may also be replaced with a terminal device such as a mobile terminal.
  • Optionally, in this embodiment, the computer terminal may be located in at least one of multiple network devices of a computer network.
  • In this embodiment, the computer terminal may execute program code for the following steps of the method for controlling service traffic between data centers: a primary data center and a standby data center back each other up, with at least one load balancing device deployed in each; when the primary data center is switched to the standby data center, the service traffic transmitted to the primary data center is directed to the standby data center, and the load balancing device of the standby data center distributes the service traffic.
  • FIG. 17 is a structural block diagram of a computer terminal according to Embodiment 4 of the present application.
  • As shown in FIG. 17, the computer terminal A may include one or more processors 171 (only one is shown in the figure), a memory 173, and a transmission device 175.
  • The memory 173 can be used to store software programs and modules, such as the program instructions/modules corresponding to the method and apparatus for controlling service traffic between data centers in the embodiments of the present application; the processor 171 runs the software programs and modules stored in the memory to execute various functional applications and data processing, that is, to implement the above method for controlling service traffic between data centers.
  • The memory 173 may include high-speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory.
  • In some instances, the memory 173 may further include memory located remotely from the processor, which may be connected to terminal A over a network; examples of such networks include, but are not limited to, the Internet, intranets, local area networks, mobile communication networks, and combinations thereof.
  • The processor 171 can invoke, through the transmission device, the information and application programs stored in the memory to perform the following steps: a primary data center and a standby data center back each other up, with at least one load balancing device deployed in each; when the primary data center is switched to the standby data center, the service traffic transmitted to the primary data center is directed to the standby data center, and the load balancing device of the standby data center distributes the service traffic.
  • Optionally, the processor 171 may further execute program code for the following step: monitoring the primary data center through the intermediate router, and switching the primary data center to the standby data center if the primary data center is detected to be unavailable.
  • Optionally, the processor 171 may further execute program code for the following step: the unavailable state includes at least any one of the following states: a power-off state, a fault state, an intrusion state, and an overflow state.
  • Optionally, the processor 171 may further execute program code for the following step: setting the data center with the higher priority in the data center cluster as the primary data center and the data center with the lower priority as the standby data center, where before the primary data center is switched to the standby data center, the method further includes: the primary data center and the standby data center synchronizing data in real time.
  • Optionally, the processor 171 may further execute program code for the following step: the load balancing device includes any one or more of the following types: a Layer-3 load balancing device, a Layer-4 load balancing device, a Layer-5 load balancing device, a Layer-6 load balancing device, and a Layer-7 load balancing device.
  • Optionally, the processor 171 may further execute program code for the following step: when the load balancing device includes a Layer-4 load balancing device, the Layer-4 load balancing device of the standby data center selects the target server according to the scheduling policy, and the Layer-4 load balancing device distributes the service traffic to the target server through the LVS cluster.
  • Optionally, the processor 171 may further execute program code for the following step: the scheduling policy includes determining the target server by checking the online state or resource usage of the multiple back-end service servers, where the scheduling policy is configured by the control server of the standby data center; when any data center is allowed to access every back-end service group, the LVS cluster generates cross-flows when forwarding service traffic among the multiple back-end service servers.
  • Optionally, the processor 171 may further execute program code for the following step: when the load balancing device includes a Layer-7 load balancing device, the Layer-7 load balancing device of the standby data center selects the target server according to the scheduling policy, and the Layer-7 load balancing device distributes the service traffic to the target server through the LVS cluster.
  • Optionally, the processor 171 may further execute program code for the following step: the scheduling policy includes determining the target server by checking the online state or resource usage of the multiple back-end service servers, where the scheduling policy is configured by the control server of the standby data center; when the multiple back-end service groups allow access only from the current standby data center, the sets of connected back-end service servers assigned to the individual LVS nodes in the LVS cluster are all different, so that no cross-flow is generated when service traffic is forwarded among the multiple back-end service servers.
  • Optionally, the processor 171 may further execute program code for the following step: the control server of the standby data center configures the RDS database corresponding to the current data center, and when the RDS database allows access only from the current standby data center, no cross-flow is generated when the RDS database stores service traffic.
  • With this embodiment of the present application, the primary data center and the standby data center back each other up, and at least one load balancing device is deployed in each; when the primary data center is switched to the standby data center, the service traffic transmitted to the primary data center is directed to the standby data center, and the load balancing device of the standby data center distributes the service traffic, thereby achieving service traffic migration. This solves the technical problem in the prior art that Internet services in an Internet data center are interrupted when the data center is faulty or unavailable.
  • Those of ordinary skill in the art can understand that the structure shown in FIG. 17 is only illustrative; the computer terminal can also be a terminal device such as a smartphone (for example, an Android or iOS phone), a tablet computer, a palmtop computer, or a mobile Internet device (MID) or PAD.
  • FIG. 17 does not limit the structure of the above electronic device.
  • For example, computer terminal A may also include more or fewer components (such as a network interface or a display device) than shown in FIG. 17, or have a configuration different from that shown in FIG. 17.
  • Embodiments of the present application also provide a storage medium.
  • Optionally, in this embodiment, the storage medium may be used to store the program code executed by the method for controlling service traffic between data centers provided in Embodiment 1 above.
  • Optionally, in this embodiment, the storage medium may be located in any computer terminal of a computer terminal group in a computer network, or in any mobile terminal of a mobile terminal group.
  • Optionally, in this embodiment, the storage medium is configured to store program code for performing the following steps: a primary data center and a standby data center back each other up, with at least one load balancing device deployed in each; when the primary data center is switched to the standby data center, the service traffic transmitted to the primary data center is directed to the standby data center, and the load balancing device of the standby data center distributes the service traffic.
  • Optionally, in this embodiment, the storage medium is configured to store program code for performing the following steps: monitoring the primary data center through the intermediate router, and switching the primary data center to the standby data center if the primary data center is detected to be unavailable.
  • Optionally, in this embodiment, the storage medium is configured to store program code for performing the following steps: the unavailable state includes at least any one of the following states: a power-off state, a fault state, an intrusion state, and an overflow state.
  • Optionally, in this embodiment, the storage medium is configured to store program code for performing the following steps: setting the data center with the higher priority in the data center cluster as the primary data center and the data center with the lower priority as the standby data center, where before the primary data center is switched to the standby data center, the method further includes: the primary data center and the standby data center synchronizing data in real time.
  • Optionally, in this embodiment, the storage medium is configured to store program code for performing the following steps: the load balancing device includes any one or more of the following types: a Layer-3 load balancing device, a Layer-4 load balancing device, a Layer-5 load balancing device, a Layer-6 load balancing device, and a Layer-7 load balancing device.
  • Optionally, in this embodiment, the storage medium is configured to store program code for performing the following steps: when the load balancing device includes a Layer-4 load balancing device, the Layer-4 load balancing device of the standby data center selects the target server according to the scheduling policy, and the Layer-4 load balancing device distributes the service traffic to the target server through the LVS cluster.
  • Optionally, in this embodiment, the storage medium is configured to store program code for performing the following steps: the scheduling policy includes determining the target server by checking the online state or resource usage of the multiple back-end service servers, where the scheduling policy is configured by the control server of the standby data center; when any data center is allowed to access every back-end service group, the LVS cluster generates cross-flows when forwarding service traffic among the multiple back-end service servers.
  • Optionally, in this embodiment, the storage medium is configured to store program code for performing the following steps: when the load balancing device includes a Layer-7 load balancing device, the Layer-7 load balancing device of the standby data center selects the target server according to the scheduling policy, and the Layer-7 load balancing device distributes the service traffic to the target server through the LVS cluster.
  • Optionally, in this embodiment, the storage medium is configured to store program code for performing the following steps: the scheduling policy includes determining the target server by checking the online state or resource usage of the multiple back-end service servers, where the scheduling policy is configured by the control server of the standby data center; when the multiple back-end service groups allow access only from the current standby data center, the sets of connected back-end service servers assigned to the individual LVS nodes in the LVS cluster are all different, so that no cross-flow is generated when service traffic is forwarded among the multiple back-end service servers.
  • Optionally, in this embodiment, the storage medium is configured to store program code for performing the following steps: the control server of the standby data center configures the RDS database corresponding to the current data center, and when the RDS database allows access only from the current standby data center, no cross-flow is generated when the RDS database stores service traffic.
  • In the several embodiments provided in the present application, it should be understood that the disclosed technical content may be implemented in other manners.
  • The device embodiments described above are merely illustrative; for example, the division into units is only a division by logical function, and there may be other divisions in actual implementation, for example multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed.
  • The mutual coupling, direct coupling, or communication connection shown or discussed may be an indirect coupling or communication connection through some interfaces, units, or modules, and may be electrical or in other forms.
  • The units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units; that is, they may be located in one place or distributed across multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of the embodiment.
  • In addition, each functional unit in the embodiments of the present application may be integrated into one processing unit, each unit may exist physically on its own, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
  • If the integrated unit is implemented in the form of a software functional unit and sold or used as a standalone product, it may be stored in a computer-readable storage medium. Such a storage medium includes a number of instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods described in the embodiments of the present application.
  • The foregoing storage medium includes various media that can store program code, such as a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk, or an optical disc.


Abstract

The present invention discloses a method, an apparatus, and a system for controlling service traffic between data centers. The method includes: providing a primary data center and a standby data center that back each other up, with at least one load balancing device deployed in each of the primary data center and the standby data center, where, when the primary data center is switched to the standby data center, the service traffic transmitted to the primary data center is directed to the standby data center, and the load balancing device of the standby data center distributes the service traffic. The present invention solves the technical problem in the prior art that Internet services in an Internet data center are interrupted when the data center is faulty or unavailable.


Claims (22)

  1. A method for controlling service traffic between data centers, comprising: a primary data center and a standby data center that back each other up, wherein at least one load balancing device is deployed in each of the primary data center and the standby data center, and wherein,
    when the primary data center is switched to the standby data center, service traffic transmitted to the primary data center is directed to the standby data center, and the load balancing device of the standby data center distributes the service traffic.
  2. The method according to claim 1, wherein the primary data center is monitored by an intermediate router, and the primary data center is switched to the standby data center if the primary data center is detected to be in an unavailable state.
  3. The method according to claim 2, wherein the unavailable state comprises at least any one of the following states: a power-off state, a fault state, an intrusion state, and an overflow state.
  4. The method according to claim 1, wherein a data center with a higher priority in a data center cluster is set as the primary data center and a data center with a lower priority is set as the standby data center, and wherein, before the primary data center is switched to the standby data center, the method further comprises: the primary data center and the standby data center synchronizing data in real time.
  5. The method according to any one of claims 1 to 4, wherein the load balancing device comprises any one or more of the following types: a Layer-3 load balancing device, a Layer-4 load balancing device, a Layer-5 load balancing device, a Layer-6 load balancing device, and a Layer-7 load balancing device.
  6. The method according to claim 5, wherein, when the load balancing device comprises the Layer-4 load balancing device, distributing the service traffic by the load balancing device of the standby data center comprises:
    the Layer-4 load balancing device of the standby data center selecting a target server according to a scheduling policy; and
    the Layer-4 load balancing device distributing the service traffic to the target server through an LVS cluster.
  7. The method according to claim 6, wherein the scheduling policy comprises: determining the target server by checking an online state or resource usage of multiple back-end service servers, wherein the scheduling policy is configured by a control server of the standby data center, and, when any data center is allowed to access every back-end service group, the LVS cluster generates cross-flows when forwarding the service traffic among the multiple back-end service servers.
  8. The method according to claim 5, wherein, when the load balancing device comprises the Layer-7 load balancing device, distributing the service traffic by the load balancing device of the standby data center comprises:
    the Layer-7 load balancing device of the standby data center selecting a target server according to a scheduling policy; and
    the Layer-7 load balancing device distributing the service traffic to the target server through an LVS cluster.
  9. The method according to claim 8, wherein the scheduling policy comprises: determining the target server by checking an online state or resource usage of multiple back-end service servers, wherein the scheduling policy is configured by a control server of the standby data center, and, when the multiple back-end service groups allow access only from the current standby data center, the at least one connected back-end service server assigned to each LVS in the LVS cluster is different, so that no cross-flow is generated when the service traffic is forwarded among the multiple back-end service servers.
  10. The method according to claim 5, wherein a control server of the standby data center configures an RDS database corresponding to the current data center, and, when the RDS database allows access only from the current standby data center, no cross-flow is generated when the RDS database stores the service traffic.
  11. A system for controlling service traffic between data centers, comprising:
    a primary data center, in which at least one load balancing device is deployed, configured to receive and forward service traffic; and
    a standby data center, which backs up and is backed up by the primary data center and in which at least one load balancing device is deployed, wherein,
    when the primary data center is switched to the standby data center, the service traffic is directed to the standby data center and is distributed by the load balancing device of the standby data center.
  12. The system according to claim 11, further comprising:
    an intermediate router, configured to monitor the primary data center and to switch the primary data center to the standby data center if the primary data center is detected to be in an unavailable state.
  13. The system according to claim 12, wherein the unavailable state comprises at least any one of the following states: a power-off state, a fault state, an intrusion state, and an overflow state.
  14. The system according to any one of claims 11 to 13, wherein the load balancing device comprises any one or more of the following types: a Layer-3 load balancing device, a Layer-4 load balancing device, a Layer-5 load balancing device, a Layer-6 load balancing device, and a Layer-7 load balancing device.
  15. The system according to claim 14, wherein the load balancing device comprises:
    the Layer-4 load balancing device, configured to select a target server according to a scheduling policy and to distribute the service traffic to the target server through an LVS cluster.
  16. The system according to claim 14, wherein the load balancing device comprises:
    the Layer-7 load balancing device, configured to select a target server according to a scheduling policy and to distribute the service traffic to the target server through an LVS cluster.
  17. The system according to claim 14, wherein the standby data center further comprises: a control server, connected to the Layer-4 load balancing device and the Layer-7 load balancing device respectively, and configured to configure a scheduling policy.
  18. An apparatus for controlling service traffic between data centers, comprising:
    a control module, configured to, when a primary data center is switched to a standby data center, direct service traffic transmitted to the primary data center to the standby data center, the service traffic being distributed by a load balancing device of the standby data center, wherein the primary data center and the standby data center back each other up, and at least one load balancing device is deployed in each of the primary data center and the standby data center.
  19. The apparatus according to claim 18, further comprising: a switching module, configured to monitor the primary data center and to switch the primary data center to the standby data center if the primary data center is detected to be in an unavailable state.
  20. The apparatus according to claim 19, wherein the unavailable state comprises at least any one of the following states: a power-off state, a fault state, an intrusion state, and an overflow state.
  21. The apparatus according to claim 18, further comprising:
    a setting module, configured to set a data center with a higher priority in a data center cluster as the primary data center and a data center with a lower priority as the standby data center; and
    a synchronization module, configured for the primary data center and the standby data center to synchronize data in real time.
  22. The apparatus according to any one of claims 19 to 21, wherein the load balancing device comprises any one or more of the following types: a Layer-3 load balancing device, a Layer-4 load balancing device, a Layer-5 load balancing device, a Layer-6 load balancing device, and a Layer-7 load balancing device.
PCT/CN2017/077807 2016-03-25 2017-03-23 数据中心间的业务流量控制方法、装置及系统 WO2017162184A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP17769461.9A EP3435627A4 (en) 2016-03-25 2017-03-23 METHOD FOR CONTROLLING THE TRAFFIC BETWEEN DATA CENTERS, DEVICE AND SYSTEM
US16/141,844 US20190028538A1 (en) 2016-03-25 2018-09-25 Method, apparatus, and system for controlling service traffic between data centers

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201610177065.2A CN107231221B (zh) 2016-03-25 2016-03-25 数据中心间的业务流量控制方法、装置及系统
CN201610177065.2 2016-03-25

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US16/141,844 Continuation US20190028538A1 (en) 2016-03-25 2018-09-25 Method, apparatus, and system for controlling service traffic between data centers

Publications (1)

Publication Number Publication Date
WO2017162184A1 true WO2017162184A1 (zh) 2017-09-28

Family

ID=59899340

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/077807 WO2017162184A1 (zh) 2016-03-25 2017-03-23 数据中心间的业务流量控制方法、装置及系统

Country Status (5)

Country Link
US (1) US20190028538A1 (zh)
EP (1) EP3435627A4 (zh)
CN (1) CN107231221B (zh)
TW (1) TWI724106B (zh)
WO (1) WO2017162184A1 (zh)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180191810A1 (en) * 2017-01-05 2018-07-05 Bank Of America Corporation Network Routing Tool
CN111585892A (zh) * 2020-04-29 2020-08-25 平安科技(深圳)有限公司 Data center traffic management and control method and system
CN111953808A (zh) * 2020-07-31 2020-11-17 上海燕汐软件信息科技有限公司 Data transmission switching method for a dual-machine active-active architecture, and architecture construction system
CN113472687A (zh) * 2021-07-15 2021-10-01 北京京东振世信息技术有限公司 Data processing method and apparatus
CN113703953A (zh) * 2020-05-20 2021-11-26 阿里巴巴集团控股有限公司 Load balancing method, apparatus, device, and storage medium

Families Citing this family (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111130835A (zh) * 2018-11-01 2020-05-08 中国移动通信集团河北有限公司 Data center active-active system, switching method, apparatus, device, and medium
CN109813377A (zh) * 2019-03-11 2019-05-28 晟途工业(大连)有限公司 Automatic tire service-condition detection and data acquisition system
CN110166524B (zh) * 2019-04-12 2023-04-07 未鲲(上海)科技服务有限公司 Data center switching method, apparatus, device, and storage medium
CN112217843 2019-07-09 2023-08-22 阿里巴巴集团控股有限公司 Service unit switching method, system, and device
CN112351051B (zh) * 2019-08-06 2024-11-15 中兴通讯股份有限公司 Cloud service processing method and apparatus, cloud server, system, and storage medium
US11652724B1 (en) * 2019-10-14 2023-05-16 Amazon Technologies, Inc. Service proxies for automating data center builds
CN110990200B (zh) * 2019-11-26 2022-07-05 苏宁云计算有限公司 Traffic switching method and apparatus based on multi-active data centers
CN111881476B (zh) * 2020-07-28 2023-07-28 平安科技(深圳)有限公司 Object storage control method and apparatus, computer device, and storage medium
CN111934958B (zh) * 2020-07-29 2022-03-29 深圳市高德信通信股份有限公司 IDC resource scheduling service management platform
CN112291266B (zh) * 2020-11-17 2022-03-29 珠海大横琴科技发展有限公司 Data processing method and apparatus, server, and storage medium
CN112751782B (zh) * 2020-12-29 2022-09-30 微医云(杭州)控股有限公司 Traffic switching method, apparatus, device, and medium based on multi-active data centers
CN112732491B (zh) * 2021-01-22 2024-03-12 中国人民财产保险股份有限公司 Data processing system and service data processing method based on the data processing system
JP7556447B2 2021-02-16 2024-09-26 日本電信電話株式会社 Communication control device, communication control method, communication control program, and communication control system
CN112929221A (zh) * 2021-03-02 2021-06-08 浪潮云信息技术股份公司 Method for implementing master-standby disaster recovery for cloud service products
CN113254205B (zh) * 2021-05-24 2023-08-15 北京百度网讯科技有限公司 Load balancing system, method, and apparatus, electronic device, and storage medium
CN113703950A (zh) * 2021-09-10 2021-11-26 国泰君安证券股份有限公司 System, method, apparatus, processor, and computer-readable storage medium for implementing server cluster traffic scheduling
CN113873039B (zh) * 2021-09-29 2024-10-22 吉林亿联银行股份有限公司 Traffic scheduling method, apparatus, electronic device, and storage medium
CN114390059B (zh) * 2021-12-29 2024-02-06 中国电信股份有限公司 Service processing system and service processing method
CN114584458B (zh) * 2022-03-03 2023-06-06 平安科技(深圳)有限公司 etcd-based cluster disaster recovery management method, system, device, and storage medium
CN115022334B (zh) * 2022-05-13 2024-12-03 深信服科技股份有限公司 Traffic distribution method and apparatus, electronic device, and storage medium
CN115442369B (zh) * 2022-09-02 2023-06-16 北京星汉未来网络科技有限公司 Service resource scheduling method and apparatus, storage medium, and electronic device
CN115865932B (zh) * 2023-02-27 2023-06-23 天翼云科技有限公司 Traffic scheduling method and apparatus, electronic device, and storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102932271A (zh) * 2012-11-27 2013-02-13 无锡城市云计算中心有限公司 Load balancing implementation method and apparatus
CN103259809A (zh) * 2012-02-15 2013-08-21 株式会社日立制作所 Load balancer, load balancing method, and hierarchical data center system
CN103647849A (zh) * 2013-12-24 2014-03-19 华为技术有限公司 Service migration method and apparatus, and disaster recovery system
US20140101656A1 (en) * 2012-10-10 2014-04-10 Zhongwen Zhu Virtual firewall mobility

Family Cites Families (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6957251B2 (en) * 2001-05-07 2005-10-18 Genworth Financial, Inc. System and method for providing network services using redundant resources
US7747760B2 (en) * 2004-07-29 2010-06-29 International Business Machines Corporation Near real-time data center switching for client requests
US7710865B2 (en) * 2005-02-25 2010-05-04 Cisco Technology, Inc. Disaster recovery for active-standby data center using route health and BGP
US8750093B2 (en) * 2010-08-17 2014-06-10 Ubeeairwalk, Inc. Method and apparatus of implementing an internet protocol signaling concentrator
US8620999B1 (en) * 2011-01-12 2013-12-31 Israel L'Heureux Network resource modification for higher network connection concurrence
US9654601B2 (en) * 2011-03-14 2017-05-16 Verizon Digital Media Services Inc. Network connection hand-off and hand-back
CN103023797B (zh) * 2011-09-23 2016-06-15 百度在线网络技术(北京)有限公司 Data center system and apparatus, and method for providing services
US20150339200A1 (en) * 2014-05-20 2015-11-26 Cohesity, Inc. Intelligent disaster recovery
CA2901223C (en) * 2014-11-17 2017-10-17 Jiongjiong Gu Method for migrating service of data center, apparatus, and system
EP3241114B1 (en) * 2014-12-31 2022-02-16 ServiceNow, Inc. Failure resistant distributed computing system
CN104516795A (zh) * 2015-01-15 2015-04-15 浪潮(北京)电子信息产业有限公司 Data access method and system
CN105389213A (zh) * 2015-10-26 2016-03-09 珠海格力电器股份有限公司 Data center system and configuration method thereof

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103259809A (zh) * 2012-02-15 2013-08-21 株式会社日立制作所 Load balancer, load balancing method, and hierarchical data center system
US20140101656A1 (en) * 2012-10-10 2014-04-10 Zhongwen Zhu Virtual firewall mobility
CN102932271A (zh) * 2012-11-27 2013-02-13 无锡城市云计算中心有限公司 Load balancing implementation method and apparatus
CN103647849A (zh) * 2013-12-24 2014-03-19 华为技术有限公司 Service migration method and apparatus, and disaster recovery system

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP3435627A4 *

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180191810A1 (en) * 2017-01-05 2018-07-05 Bank Of America Corporation Network Routing Tool
US11102285B2 (en) * 2017-01-05 2021-08-24 Bank Of America Corporation Network routing tool
CN111585892A (zh) * 2020-04-29 2020-08-25 平安科技(深圳)有限公司 Data center traffic management and control method and system
CN111585892B (zh) * 2020-04-29 2022-08-12 平安科技(深圳)有限公司 Data center traffic management and control method and system
CN113703953A (zh) * 2020-05-20 2021-11-26 阿里巴巴集团控股有限公司 Load balancing method, apparatus, device, and storage medium
CN111953808A (zh) * 2020-07-31 2020-11-17 上海燕汐软件信息科技有限公司 Data transmission switching method for a dual-machine active-active architecture, and architecture construction system
CN111953808B (zh) * 2020-07-31 2023-08-15 上海燕汐软件信息科技有限公司 Data transmission switching method for a dual-machine active-active architecture, and architecture construction system
CN113472687A (zh) * 2021-07-15 2021-10-01 北京京东振世信息技术有限公司 Data processing method and apparatus
CN113472687B (zh) * 2021-07-15 2023-12-05 北京京东振世信息技术有限公司 Data processing method and apparatus

Also Published As

Publication number Publication date
EP3435627A1 (en) 2019-01-30
EP3435627A4 (en) 2019-04-10
TW201739219A (zh) 2017-11-01
TWI724106B (zh) 2021-04-11
CN107231221B (zh) 2020-10-23
US20190028538A1 (en) 2019-01-24
CN107231221A (zh) 2017-10-03

Similar Documents

Publication Publication Date Title
WO2017162184A1 (zh) 2017-09-28 Method, apparatus, and system for controlling service traffic between data centers
CN110912780B (zh) 一种高可用集群检测方法、系统及受控终端
CN107454155B (zh) 一种基于负载均衡集群的故障处理方法、装置以及系统
US9659075B2 (en) Providing high availability in an active/active appliance cluster
US10148756B2 (en) Latency virtualization in a transport network using a storage area network
US11734138B2 (en) Hot standby method, apparatus, and system
US7609619B2 (en) Active-active data center using RHI, BGP, and IGP anycast for disaster recovery and load distribution
WO2017114017A1 (zh) 实现负载均衡的计算机设备、系统和方法
WO2021217872A1 (zh) 基于虚拟私有云的网关节点的配置方法、装置和介质
CN106549875A (zh) 一种会话管理方法、装置及负载均衡器
JP6389956B2 (ja) ネットワークトラフィックを管理する方法およびシステム
CN104301417B (zh) 一种负载均衡方法及装置
CN104468151A (zh) 一种集群切换时保持tcp会话的系统和方法
US11303701B2 (en) Handling failure at logical routers
CN104243304B (zh) 非全连通拓扑结构的数据处理方法、设备和系统
WO2016065804A1 (zh) 一种流量负载均衡方法及路由设备
CN114500340B (zh) 一种智能调度分布式路径计算方法及系统
CN102970388B (zh) 用于管理外网访问的方法和系统
US20250063017A1 (en) 5g user terminal ip address confirmation method, apparatus and system
Rao et al. High availability and load balancing in SDN controllers
CN114268581B (zh) 一种实现网络设备高可用和负载分担的方法
WO2009015613A1 (fr) Procédé et dispositif servant à mettre en place une reprise sur sinistre
CN117692458B (zh) 一种基于标签的分布式负载均衡实现方法及系统
US12113704B2 (en) Using a routing protocol for network port failover
US12212486B2 (en) Multi-host link aggregation for active-active cluster

Legal Events

Date Code Title Description
NENP Non-entry into the national phase

Ref country code: DE

WWE Wipo information: entry into national phase

Ref document number: 2017769461

Country of ref document: EP

ENP Entry into the national phase

Ref document number: 2017769461

Country of ref document: EP

Effective date: 20181025

121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17769461

Country of ref document: EP

Kind code of ref document: A1