WO2025062433A1 - Method and system of load balancing between capacity manager instances - Google Patents
- Publication number: WO2025062433A1 (PCT/IN2024/051795)
- Authority: WIPO (PCT)
Classifications
- H04L67/1034 — Reaction to server failures by a load balancer
- G06F9/5005 — Allocation of resources, e.g. of the central processing unit [CPU], to service a request
- G06F9/5044 — Allocation of resources to service a request, the resource being a machine, considering hardware capabilities
- G06F9/5083 — Techniques for rebalancing the load in a distributed system
- H04L67/1008 — Server selection for load balancing based on parameters of servers, e.g. available memory or workload
- H04L67/1029 — Load balancing using data related to the state of servers
Definitions
- FIG. 3 illustrates an exemplary block diagram of a system for load balancing between capacity manager (CM) instances, in accordance with exemplary implementations of the present disclosure.
- FIG. 4 illustrates an exemplary method flow diagram for load balancing between capacity manager (CM) instances, in accordance with the exemplary embodiments of the present disclosure.
- FIG. 5 illustrates another exemplary method flow diagram for ensuring seamless interaction between a capacity manager (CM) and a load balancer (LB), in accordance with exemplary embodiments of the present disclosure.
- FIG. 6 illustrates an exemplary system architecture for load balancing between capacity manager (CM) instances, in accordance with the exemplary embodiments of the present disclosure.
- the terms "exemplary" and/or "demonstrative" are used herein to mean serving as an example, instance, or illustration.
- the subject matter disclosed herein is not limited by such examples.
- any aspect or design described herein as “exemplary” and/or “demonstrative” is not necessarily to be construed as preferred or advantageous over other aspects or designs, nor is it meant to preclude equivalent exemplary structures and techniques known to those of ordinary skill in the art.
- where the terms "includes," "has," "contains," and other similar words are used in either the detailed description or the claims, such terms are intended to be inclusive in a manner similar to the term "comprising" as an open transition word, without precluding any additional or other elements.
- the user device and/or a system as described herein to implement technical features as disclosed in the present disclosure may also comprise a “processor” or “processing unit”, wherein processor refers to any logic circuitry for processing instructions.
- the processor may be a general-purpose processor, a special purpose processor, a conventional processor, a digital signal processor, a plurality of microprocessors, one or more microprocessors in association with a Digital Signal Processor (DSP) core, a controller, a microcontroller, Application Specific Integrated Circuits, Field Programmable Gate Array circuits, any other type of integrated circuits, etc.
- the processor may perform signal coding, data processing, input/output processing, and/or any other functionality that enables the working of the system according to the present disclosure. More specifically, the processor is a hardware processor.
- All modules, units, components used herein, unless explicitly excluded herein, may be software modules or hardware processors, the processors being a general-purpose processor, a special purpose processor, a conventional processor, a digital signal processor (DSP), a plurality of microprocessors, one or more microprocessors in association with a DSP core, a controller, a microcontroller, Application Specific Integrated Circuits (ASIC), Field Programmable Gate Array circuits (FPGA), any other type of integrated circuits, etc.
- the transceiver unit includes at least one receiver and at least one transmitter configured respectively for receiving and transmitting data, signals, information or a combination thereof between units/components within the system and/or connected with the system.
- FIG. 1 illustrates an exemplary block diagram representation of a management and orchestration (MANO) architecture/ platform [100], in accordance with exemplary implementation of the present disclosure.
- the MANO architecture [100] is developed for automatically managing telecom cloud infrastructure, managing design and deployment, managing instantiation of network node(s)/ service(s), etc.
- the MANO architecture [100] deploys the network node(s) in the form of Virtual Network Function (VNF) and Cloud-native/ Container Network Function (CNF).
- the system may comprise one or more components of the MANO architecture [100]
- the MANO architecture [100] is used to auto-instantiate the VNFs into the corresponding environment of the present disclosure so that it could help in onboarding other vendor(s) CNFs and VNFs to the platform.
- the MANO architecture comprises a user interface layer [102], a network function virtualization (NFV) and software defined network (SDN) design function module [104], a platforms foundation services module [106], a platform core services module [108] and a platform resource adapters and utilities module [112]. All the components are assumed to be connected to each other in a manner as obvious to the person skilled in the art for implementing features of the present disclosure.
- the NFV and SDN design function module [104] comprises a VNF lifecycle manager (compute) [1042], a VNF catalogue [1044], a network services catalogue [1046], a network slicing and service chaining manager [1048], a physical and virtual resource manager [1050] and a CNF lifecycle manager [1052].
- the VNF lifecycle manager (compute) [1042] may be responsible for deciding on which server of the communication network, the microservice will be instantiated.
- the VNF lifecycle manager (compute) [1042] may manage the overall flow of incoming/ outgoing requests during interaction with the user.
- the VNF lifecycle manager (compute) [1042] may be responsible for determining which sequence to be followed for executing the process.
- the VNF catalogue [1044] stores the metadata of all the VNFs (also CNFs in some cases).
- the network services catalogue [1046] stores the information of the services that need to be run.
- the network slicing and service chaining manager [1048] manages the slicing (an ordered and connected sequence of network service/ network functions (NFs)) that must be applied to a specific networked data packet.
- the physical and virtual resource manager [1050] stores the logical and physical inventory of the VNFs. Just like the VNF lifecycle manager (compute) [1042], the CNF lifecycle manager [1052] may be used for CNF lifecycle management.
- the platforms foundation services module [106] comprises a microservices elastic load balancer [1062], an identity & access manager [1064], a command line interface (CLI) [1066], a central logging manager [1068], and an event routing manager [1070].
- the microservices elastic load balancer [1062] may be used for maintaining the load balancing of the requests for the services.
- the identity & access manager [1064] may be used for logging purposes.
- the command line interface (CLI) [1066] may be used to provide commands to execute certain processes which require changes during run time.
- the central logging manager [1068] may be responsible for keeping the logs of every service. These logs are generated by the MANO platform [100] and are used for debugging purposes.
- the event routing manager [1070] may be responsible for routing the events i.e., the application programming interface (API) hits to the corresponding services.
- the platforms core services module [108] comprises NFV infrastructure monitoring manager [1082]; an assure manager [1084]; a performance manager [1086]; a policy execution engine [1088]; a capacity monitoring manager [1090]; a release management (mgmt.) repository [1092]; a configuration manager & a golden configuration template (GCT) [1094]; an NFV platform decision analytics [1096]; a platform NoSQL DB [1098]; a platform schedulers and cron jobs [1100]; a VNF backup & upgrade manager [1102]; a micro service auditor [1104]; and a platform operations, administration and maintenance manager [1106].
- the NFV infrastructure monitoring manager [1082] monitors the infrastructure part of the NFs.
- the assure manager [1084] may be responsible for supervising the alarms the vendor may be generating.
- the performance manager [1086] may be responsible for managing the performance counters.
- the policy execution engine (PEGN) [1088] may be responsible for managing all the policies.
- the capacity monitoring manager (CMM) [1090] may be responsible for sending the request to the PEGN [1088].
- the release management (mgmt.) repository (RMR) [1092] may be responsible for managing the releases and the images of all the vendor network nodes.
- the configuration manager & GCT [1094] manages the configuration and GCT of all the vendors.
- the NFV platform decision analytics (NPDA) [1096] helps in deciding the priority of using the network resources. It may be further noted that the policy execution engine (PEGN) [1088], the configuration manager & GCT [1094] and the NPDA [1096] work together.
- the platform NoSQL DB [1098] may be a database for storing all the inventory (both physical and logical) as well as the metadata of the VNFs and CNFs.
- the platform schedulers and cron jobs [1100] schedule tasks such as, but not limited to, triggering of an event, traversing the network graph, etc.
- the VNF backup & upgrade manager [1102] takes backup of the images, binaries of the VNFs and the CNFs and produces those backups on demand in case of server failure.
- the micro service auditor [1104] audits the microservices. For example, in a hypothetical case where instances not instantiated by the MANO architecture [100] are using the network resources, the micro service auditor [1104] audits and reports the same so that resources can be released for services running in the MANO architecture [100], thereby ensuring that services only run on the MANO platform [100].
- the platform operations, administration and maintenance manager [1106] may be used for managing newer instances that are spawning.
- the platform resource adapters and utilities module [112] further comprises a platform external API adaptor and gateway [1122]; a generic decoder and indexer (XML, CSV, JSON) [1124]; a docker swarm adaptor [1126]; an OpenStack API adapter [1128]; and a NFV gateway [1130].
- the platform external API adaptor and gateway [1122] may be responsible for handling the external services (external to the MANO platform [100]) that require the network resources.
- the generic decoder and indexer (XML, CSV, JSON) [1124] directly receives the data of the vendor system in the XML, CSV, or JSON format.
- the docker swarm adaptor [1126] may be the interface provided between the telecom cloud and the MANO architecture [100] for communication.
- the OpenStack API adapter [1128] may be used to connect with the virtual machines (VMs).
- the NFV gateway [1130] may be responsible for providing the path to each service going to/coming from the MANO architecture [100].
- the computing device [200] may include a bus [202] or other communication mechanism for communicating information, and a hardware processor [204] coupled with bus [202] for processing information.
- the hardware processor [204] may be, for example, a general-purpose microprocessor.
- the computing device [200] may also include a main memory [206], such as a random-access memory (RAM), or other dynamic storage device, coupled to the bus [202] for storing information and instructions to be executed by the processor [204].
- the main memory [206] also may be used for storing temporary variables or other intermediate information during execution of the instructions to be executed by the processor [204]. Such instructions, when stored in non-transitory storage media accessible to the processor [204], render the computing device [200] into a special-purpose machine that is customized to perform the operations specified in the instructions.
- the computing device [200] further includes a read only memory (ROM) [208] or other static storage device coupled to the bus [202] for storing static information and instructions for the processor [204].
- a storage device [210] such as a magnetic disk, optical disk, or solid-state drive is provided and coupled to the bus [202] for storing information and instructions.
- the computing device [200] may be coupled via the bus [202] to a display [212], such as a cathode ray tube (CRT), Liquid Crystal Display (LCD), Light Emitting Diode (LED) display, Organic LED (OLED) display, etc., for displaying information to a computer user.
- An input device [214] including alphanumeric and other keys, touch screen input means, etc.
- a cursor controller [216] such as a mouse, a trackball, or cursor direction keys, for communicating direction information and command selections to the processor [204], and for controlling cursor movement on the display [212].
- the input device typically has two degrees of freedom in two axes, a first axis (e.g., x) and a second axis (e.g., y), that allow the device to specify positions in a plane
- the computing device [200] may implement the techniques described herein using customized hard-wired logic, one or more ASICs or FPGAs, firmware and/or program logic which in combination with the computing device [200] causes or programs the computing device [200] to be a special-purpose machine.
- the techniques herein are performed by the computing device [200] in response to the processor [204] executing one or more sequences of one or more instructions contained in the main memory [206]. Such instructions may be read into the main memory [206] from another storage medium, such as the storage device [210]. Execution of the sequences of instructions contained in the main memory [206] causes the processor [204] to perform the process steps described herein.
- hard-wired circuitry may be used in place of or in combination with software instructions.
Abstract
The present disclosure relates to a method and a system for load balancing between capacity manager (CM) instances. The disclosure encompasses receiving, from an operation and management (OAM) unit [306], health status information of a plurality of capacity manager (CM) instances [308]. It may be noted that the health status information is detected by the OAM unit [306] based on monitoring and is indicative of a healthy or a malfunctioning CM instance. The present disclosure then encompasses distributing the data traffic from the malfunctioning CM instance among the healthy CM instances. The distribution is based on receiving an indication corresponding to taking an ownership of at least a part of the data traffic from the malfunctioning CM instance.
Description
METHOD AND SYSTEM OF LOAD BALANCING BETWEEN CAPACITY MANAGER INSTANCES
FIELD OF THE DISCLOSURE
[0001] Embodiments of the present disclosure generally relate to the field of network performance management systems. More particularly, embodiments of the present disclosure relate to load balancing between capacity manager (CM) instances to improve network performance.
BACKGROUND
[0002] The following description of related art is intended to provide background information pertaining to the field of the disclosure. This section may include certain aspects of the art that may be related to various features of the present disclosure. However, it should be appreciated that this section is to be used only to enhance the understanding of the reader with respect to the present disclosure, and not as an admission of prior art.
[0003] Wireless communication technology has rapidly evolved over the past few decades, with each generation bringing significant improvements and advancements. The first generation of wireless communication technology was based on analog technology and offered only voice services. However, with the advent of the second-generation (2G) technology, digital communication and data services became possible, and text messaging was introduced. The third generation (3G) technology marked the introduction of high-speed internet access, mobile video calling, and location-based services. The fourth generation (4G) technology revolutionized wireless communication with faster data speeds, better network coverage, and improved security. Currently, the fifth generation (5G) technology is being deployed, promising even faster data speeds, low latency, and the ability to connect multiple devices simultaneously. With each generation, wireless communication technology has become more advanced, sophisticated, and capable of delivering more services to its users.
[0004] Further, in wireless communication technology, a capacity manager is a critical component responsible for optimizing resource utilization and the smooth operation of the network. The capacity manager helps in automating resource allocation, monitoring and optimization processes. For enhancing the scalability, reliability, and fault tolerance, various instances of capacity managers may be used. Due to excessive load at the capacity manager instances, various problems may arise
such as performance degradation, resource exhaustion, inaccurate decision making, and system instability. The performance degradation may be in terms of increased latency, reduced throughput and deteriorated quality of service (QoS). The resource exhaustion may be in terms of CPU overload, memory constraints and storage limitations. The inaccurate decision making may be in terms of incorrect resource allocation, delayed response, and congestion management failures.
[0005] Hence, load balancing is required for distribution of workloads evenly among these instances of capacity managers. To prevent the possibility of failure caused by failed requests due to either excessive traffic on a specific instance or instances in poor health, load balancing is required for managing the instances of capacity managers.
[0006] Accordingly, there exists a need for a solution for routing the traffic load among the capacity managers for effective and efficient load balancing. Further, there is a need for a solution having the ability to support HTTP/HTTPS in a parallel configuration. Further, there exists a need for a solution which ensures routing client requests across all servers in a manner that maximizes speed and capacity utilization. Further, there exists a need for a solution which utilizes header-based routing which saves time and database hits.
[0007] The present disclosure provides a solution to achieve load balancing between capacity manager (CM) instances.
OBJECTS OF THE DISCLOSURE
[0008] This section is provided to introduce certain objects of the present disclosure in a simplified form that are further described below in the description. In order to overcome at least a few problems associated with the known solutions as provided in the previous section, an object of the present disclosure is to substantially reduce the limitations and drawbacks of the prior art as described hereinabove.
[0009] Some of the objects of the present disclosure, which at least one embodiment disclosed herein satisfies are listed herein below.
[0010] It is an object of the present disclosure for ensuring seamless interaction between capacity manager (CM) instance and Load Balancer (LB).
[0011] It is another object of the present disclosure to route client requests across all servers in a manner that maximizes speed and capacity utilization.
[0012] It is yet another object of the present disclosure to provide header-based routing which saves time and database hits.
[0013] Yet another object of the present disclosure is to provide a configurable support for HTTP/HTTPS in parallel.
SUMMARY OF THE DISCLOSURE
[0014] This section is provided to introduce certain aspects of the present disclosure in a simplified form that are further described below in the detailed description. This summary is not intended to identify the key features or the scope of the claimed subject matter.
[0015] An aspect of the present disclosure may relate to a method of load balancing between capacity manager (CM) instances. The method comprises receiving, by a transceiver unit at a load balancer (LB) unit, from an operation and management (OAM) unit, a health status information of a plurality of capacity manager (CM) instances. The health status information of the plurality of CM instances is detected by the OAM unit based on a monitoring of the health status information of the plurality of capacity manager (CM) instances. The health status information of each of the plurality of CM instances is indicative of one of a healthy CM instance and a malfunctioning CM instance. The method further comprises distributing, by a processing unit at the LB unit, a data traffic from the malfunctioning CM instance among the healthy CM instances. The distribution is based on receiving, from one or more of the healthy CM instances, an indication corresponding to taking an ownership of at least a part of the data traffic from the malfunctioning CM instance.
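For illustration only, the failover flow described in this aspect can be sketched in Python as follows. All class and method names are hypothetical assumptions, not part of the claimed system, and the ownership-indication step is simplified to an even spread of the orphaned traffic over the healthy instances.

```python
# Illustrative sketch: the LB unit receives health status updates from the
# OAM unit and redistributes traffic from a malfunctioning CM instance
# among the healthy CM instances. Names are hypothetical.

class LoadBalancer:
    def __init__(self, cm_instances):
        # CM instance id -> health status ("healthy" / "malfunctioning")
        self.health = {cm: "healthy" for cm in cm_instances}
        self.assignments = {cm: [] for cm in cm_instances}  # cm -> traffic keys

    def on_health_update(self, cm_id, status):
        """Called when the OAM unit reports a health status change."""
        self.health[cm_id] = status
        if status == "malfunctioning":
            self.redistribute(cm_id)

    def redistribute(self, failed_cm):
        """Move traffic from the failed CM to the healthy CM instances."""
        orphaned = self.assignments.pop(failed_cm, [])
        healthy = [cm for cm, s in self.health.items() if s == "healthy"]
        for i, traffic_key in enumerate(orphaned):
            owner = healthy[i % len(healthy)]  # simplified even spread
            self.assignments[owner].append(traffic_key)
        self.assignments[failed_cm] = []

lb = LoadBalancer(["cm-1", "cm-2", "cm-3"])
lb.assignments["cm-1"] = ["req-a", "req-b"]
lb.on_health_update("cm-1", "malfunctioning")
```

In this sketch the traffic previously assigned to cm-1 ends up split between cm-2 and cm-3, while cm-1 is left with no assignments.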
[0016] In an exemplary aspect of the present disclosure, the method comprises distributing, by the processing unit, the data traffic based on a timeout indication related to serving a transaction.
[0017] In an exemplary aspect of the present disclosure, post the receiving, from one or more of the healthy CM instances, the indication corresponding to taking the ownership of at least a part of the data traffic from the malfunctioning CM instance, the method comprises fetching, by the
one or more of the healthy CM instances, a state information of the incomplete transaction being served by the malfunctioning CM instance.
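The state-fetching step above can be sketched as follows. The shared state store, its schema, and all names are assumptions for illustration; the disclosure only states that the healthy instance fetches state information of the incomplete transaction.

```python
# Hypothetical sketch: after claiming ownership of traffic from a
# malfunctioning CM instance, a healthy CM instance fetches the state of
# the incomplete transactions so it can resume them.

class SharedStateStore:
    """Stands in for whatever store persists per-transaction state."""
    def __init__(self):
        self._state = {}

    def save(self, txn_id, owner, state, complete=False):
        self._state[txn_id] = {"owner": owner, "state": state,
                               "complete": complete}

    def fetch_incomplete(self, cm_id):
        """Return records of transactions the given CM left unfinished."""
        return {t: rec for t, rec in self._state.items()
                if rec["owner"] == cm_id and not rec["complete"]}

class HealthyCM:
    def __init__(self, cm_id, store):
        self.cm_id = cm_id
        self.store = store
        self.resumed = {}

    def take_ownership(self, failed_cm):
        """Claim and resume the failed CM's incomplete transactions."""
        for txn_id, rec in self.store.fetch_incomplete(failed_cm).items():
            rec["owner"] = self.cm_id      # claim the transaction
            self.resumed[txn_id] = rec
```

A completed transaction is ignored; only incomplete state is fetched and re-owned.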
[0018] In an exemplary aspect of the present disclosure, the indication is based on a priority assigned by the OAM unit to each of the healthy CM instances.
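A minimal sketch of this priority-based indication, assuming that the instance with the highest OAM-assigned priority takes ownership first (the selection rule and priority scale are assumptions, not stated in the disclosure):

```python
# Hypothetical sketch: pick the healthy CM instance with the highest
# OAM-assigned priority as the one to take ownership.

def select_owner(healthy_priorities):
    """healthy_priorities: dict of cm_id -> priority (higher wins)."""
    if not healthy_priorities:
        raise ValueError("no healthy CM instances available")
    return max(healthy_priorities, key=healthy_priorities.get)
```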
[0019] In an exemplary aspect of the present disclosure, the method comprises distributing, by the processing unit, the data traffic based on at least one of a header-based routing procedure and a context-based routing procedure.
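The two routing procedures can be sketched as follows. The header name, the context fields, and the hashing rule are all assumptions for illustration; the disclosure does not specify them. Header-based routing reads an explicit hint and avoids a database lookup; context-based routing derives the target from request attributes instead.

```python
# Illustrative sketch of header-based vs. context-based routing at the LB.
import zlib

def route_request(headers, context, instance_table):
    """Return the CM instance id that should serve this request."""
    # Header-based routing: trust an explicit routing hint when present,
    # saving time and database hits.
    hinted = headers.get("X-CM-Instance")  # hypothetical header name
    if hinted in instance_table:
        return hinted
    # Context-based routing: derive the target from request context,
    # e.g. a stable hash of the tenant id over the known instances.
    tenant = context.get("tenant", "")
    instances = sorted(instance_table)
    return instances[zlib.crc32(tenant.encode()) % len(instances)]
```

The stable CRC32 hash keeps requests for the same tenant on the same instance across calls.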
[0020] In an exemplary aspect of the present disclosure, the method comprises distributing, by the processing unit, the data traffic among the plurality of CM instances in a round robin manner.
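The round-robin distribution above can be sketched with a simple rotating cycle over the plurality of CM instances (names are illustrative):

```python
# Illustrative sketch: round-robin distribution of data traffic among the
# plurality of CM instances, using itertools.cycle to keep rotation state.
import itertools

class RoundRobinLB:
    def __init__(self, instances):
        self._cycle = itertools.cycle(instances)

    def next_instance(self):
        """Return the next CM instance in the fixed rotation."""
        return next(self._cycle)

lb = RoundRobinLB(["cm-1", "cm-2", "cm-3"])
picks = [lb.next_instance() for _ in range(4)]
# picks -> ["cm-1", "cm-2", "cm-3", "cm-1"]
```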
[0021] In an exemplary aspect of the present disclosure, the method comprises maintaining the health status information of each of the plurality of CM instances in at least one of a local cache associated with each of the plurality of CM instances, and a database stored in a storage unit.
[0022] In an exemplary aspect of the present disclosure, the method comprises transmitting, by the transceiver unit, an acknowledgement to the plurality of CM instances. The acknowledgement is indicative of distribution of data traffic from the malfunctioning CM instance among the healthy CM instances.
[0023] In an exemplary aspect of the present disclosure, the method comprising receiving, by the transceiver unit at the LB unit from the 0AM unit, an alert related to one of: an addition of a CM instance and a deletion of a CM instance among the plurality of the CM instances.
[0024] In another exemplary aspect of the present disclosure, the data traffic is distributed, by the processing unit at the LB unit, from the malfunctioning CM instance, among the healthy CM instances, over a CM LB interface.
[0025] Another aspect of the present disclosure may relate to a system of load balancing between capacity manager (CM) instances. The system comprising a load balancer (LB) unit. The load balancer unit further comprises a transceiver unit configured to receive, from an operation and management (0AM) unit, a health status information of a plurality of capacity manager (CM) instances. The health status information of the plurality of CM instances is detected by the 0AM
unit based on a monitoring of the health status information of the plurality capacity manager (CM) instances. The health status information of each of the plurality of CM instances is indicative of one of a healthy CM instance and malfunctioning CM instance. The load balancer unit further comprises a processing unit configured to distribute a data traffic from the malfunctioning CM instance among the healthy CM instances. The distribution is based on receiving, from one or more of the healthy CM instances, an indication corresponding to taking an ownership of at least a part of the data traffic from the malfunctioning CM instance.
[0026] Another aspect of the present disclosure may relate to a non-transitory computer-readable storage medium storing instructions for load balancing between capacity manager (CM) instances, the storage medium comprising executable code which, when executed by one or more units of a system, causes a transceiver unit to receive, from an operation and management (OAM) unit, a health status information of a plurality of capacity manager (CM) instances. The health status information of the plurality of CM instances is detected by the OAM unit based on a monitoring of the health status information of the plurality of capacity manager (CM) instances. The health status information of each of the plurality of CM instances is indicative of one of a healthy CM instance and a malfunctioning CM instance. Further, the executable code, when executed, further causes a processing unit to distribute a data traffic from the malfunctioning CM instance among the healthy CM instances. The distribution is based on receiving, from one or more of the healthy CM instances, an indication corresponding to taking an ownership of at least a part of the data traffic from the malfunctioning CM instance.
DESCRIPTION OF DRAWINGS
[0027] The accompanying drawings, which are incorporated herein, and constitute a part of this disclosure, illustrate exemplary embodiments of the disclosed methods and systems in which like reference numerals refer to the same parts throughout the different drawings. Components in the drawings are not necessarily to scale, emphasis instead being placed upon clearly illustrating the principles of the present disclosure. Some drawings may indicate the components using block diagrams and may not represent the internal circuitry of each component. It will be appreciated by those skilled in the art that disclosure of such drawings includes disclosure of electrical components, electronic components or circuitry commonly used to implement such components.
[0028] FIG. 1 illustrates an exemplary block diagram representation of a management and orchestration (MANO) architecture/ platform, in accordance with exemplary implementations of the present disclosure.
[0029] FIG. 2 illustrates an exemplary block diagram of a computing device upon which the features of the present disclosure may be implemented in accordance with exemplary implementations of the present disclosure.
[0030] FIG. 3 illustrates an exemplary block diagram of a system for load balancing between capacity manager (CM) instances, in accordance with exemplary implementations of the present disclosure.
[0031] FIG. 4 illustrates an exemplary method flow diagram for load balancing between capacity manager (CM) instances, in accordance with the exemplary embodiments of the present disclosure.
[0032] FIG. 5 illustrates another exemplary method flow diagram for ensuring seamless interaction between a capacity monitoring manager (CMM) and a load balancer (LB), in accordance with exemplary embodiments of the present disclosure.
[0033] FIG. 6 illustrates an exemplary system architecture for load balancing between capacity manager (CM) instances, in accordance with the exemplary embodiments of the present disclosure.
[0034] The foregoing shall be more apparent from the following more detailed description of the disclosure.
DETAILED DESCRIPTION
[0035] In the following description, for the purposes of explanation, various specific details are set forth to provide a thorough understanding of embodiments of the present disclosure. It will be apparent, however, that embodiments of the present disclosure may be practiced without these specific details. Several features described hereafter can each be used independently of one another or with any combination of other features. An individual feature may not address any of the problems discussed above or might address only some of the problems discussed above. Some of the problems discussed above might not be fully addressed by any of the features described herein. Example embodiments of the present disclosure are described below, as illustrated in various
drawings in which like reference numerals refer to the same parts throughout the different drawings.
[0036] The ensuing description provides exemplary embodiments only, and is not intended to limit the scope, applicability, or configuration of the disclosure. Rather, the ensuing description of the exemplary embodiments will provide those skilled in the art with an enabling description for implementing an exemplary embodiment. It should be understood that various changes may be made in the function and arrangement of elements without departing from the spirit and scope of the disclosure as set forth.
[0037] Specific details are given in the following description to provide a thorough understanding of the embodiments. However, it will be understood by one of ordinary skill in the art that the embodiments may be practiced without these specific details. For example, circuits, systems, networks, processes, and other components may be shown as components in block diagram form in order not to obscure the embodiments in unnecessary detail. In other instances, well-known circuits, processes, algorithms, structures, and techniques may be shown without unnecessary detail in order to avoid obscuring the embodiments.
[0038] Also, it is noted that individual embodiments may be described as a process which is depicted as a flowchart, a flow diagram, a data flow diagram, a structure diagram, or a block diagram. Although a flowchart may describe the operations as a sequential process, many of the operations can be performed in parallel or concurrently. In addition, the order of the operations may be re-arranged. A process is terminated when its operations are completed, but it could have additional steps not included in a figure.
[0039] The word “exemplary” and/or “demonstrative” is used herein to mean serving as an example, instance, or illustration. For the avoidance of doubt, the subject matter disclosed herein is not limited by such examples. In addition, any aspect or design described herein as “exemplary” and/or “demonstrative” is not necessarily to be construed as preferred or advantageous over other aspects or designs, nor is it meant to preclude equivalent exemplary structures and techniques known to those of ordinary skill in the art. Furthermore, to the extent that the terms “includes,” “has,” “contains,” and other similar words are used in either the detailed description or the claims, such terms are intended to be inclusive in a manner similar to the term “comprising” as an open transition word without precluding any additional or other elements.
[0040] Further, the user device and/or a system as described herein to implement technical features as disclosed in the present disclosure may also comprise a “processor” or “processing unit”, wherein processor refers to any logic circuitry for processing instructions. The processor may be a general-purpose processor, a special purpose processor, a conventional processor, a digital signal processor, a plurality of microprocessors, one or more microprocessors in association with a Digital Signal Processor (DSP) core, a controller, a microcontroller, Application Specific Integrated Circuits, Field Programmable Gate Array circuits, any other type of integrated circuits, etc. The processor may perform signal coding data processing, input/output processing, and/or any other functionality that enables the working of the system according to the present disclosure. More specifically, the processor is a hardware processor.
[0041] All modules, units, components used herein, unless explicitly excluded herein, may be software modules or hardware processors, the processors being a general-purpose processor, a special purpose processor, a conventional processor, a digital signal processor (DSP), a plurality of microprocessors, one or more microprocessors in association with a DSP core, a controller, a microcontroller, Application Specific Integrated Circuits (ASIC), Field Programmable Gate Array circuits (FPGA), any other type of integrated circuits, etc.
[0042] As used herein the transceiver unit includes at least one receiver and at least one transmitter configured respectively for receiving and transmitting data, signals, information or a combination thereof between units/components within the system and/or connected with the system.
[0043] Hereinafter, exemplary embodiments of the present disclosure will be described with reference to the accompanying drawings.
[0044] FIG. 1 illustrates an exemplary block diagram representation of a management and orchestration (MANO) architecture/ platform [100], in accordance with exemplary implementations of the present disclosure. The MANO architecture [100] is developed for managing telecom cloud infrastructure automatically, managing design or deployment design, managing instantiation of network node(s)/ service(s), etc. The MANO architecture [100] deploys the network node(s) in the form of Virtual Network Function (VNF) and Cloud-native/ Container Network Function (CNF). The system may comprise one or more components of the MANO architecture [100]. The MANO architecture [100] is used to auto-instantiate the VNFs into the corresponding environment of the present disclosure so that it could help in onboarding other vendors' CNFs and VNFs to the platform.
[0045] As shown in FIG. 1, the MANO architecture [100] comprises a user interface layer [102], a network function virtualization (NFV) and software defined network (SDN) design function module [104], a platforms foundation services module [106], a platform core services module [108] and a platform resource adapters and utilities module [112]. All the components are assumed to be connected to each other in a manner as obvious to the person skilled in the art for implementing features of the present disclosure.
[0046] The NFV and SDN design function module [104] comprises a VNF lifecycle manager (compute) [1042], a VNF catalogue [1044], a network services catalogue [1046], a network slicing and service chaining manager [1048], a physical and virtual resource manager [1050] and a CNF lifecycle manager [1052]. The VNF lifecycle manager (compute) [1042] may be responsible for deciding on which server of the communication network the microservice will be instantiated. The VNF lifecycle manager (compute) [1042] may manage the overall flow of incoming/ outgoing requests during interaction with the user. The VNF lifecycle manager (compute) [1042] may be responsible for determining which sequence is to be followed for executing the process, for example, in an AMF network function of the communication network (such as a 5G network), the sequence for execution of processes P1 and P2, etc. The VNF catalogue [1044] stores the metadata of all the VNFs (also CNFs in some cases). The network services catalogue [1046] stores the information of the services that need to be run. The network slicing and service chaining manager [1048] manages the slicing (an ordered and connected sequence of network services/ network functions (NFs)) that must be applied to a specific networked data packet. The physical and virtual resource manager [1050] stores the logical and physical inventory of the VNFs. Just like the VNF lifecycle manager (compute) [1042], the CNF lifecycle manager [1052] may be used for the CNFs' lifecycle management.
[0047] The platforms foundation services module [106] comprises a microservices elastic load balancer [1062], an identity & access manager [1064], a command line interface (CLI) [1066], a central logging manager [1068], and an event routing manager [1070]. The microservices elastic load balancer [1062] may be used for maintaining the load balancing of the requests for the services. The identity & access manager [1064] may be used for logging purposes. The command line interface (CLI) [1066] may be used to provide commands to execute certain processes which require changes during the run time. The central logging manager [1068] may be responsible for keeping the logs of every service. These logs are generated by the MANO platform [100]. These logs are used for debugging purposes. The event routing manager [1070] may be responsible for routing the events, i.e., the application programming interface (API) hits, to the corresponding services.
[0048] The platforms core services module [108] comprises an NFV infrastructure monitoring manager [1082]; an assure manager [1084]; a performance manager [1086]; a policy execution engine [1088]; a capacity monitoring manager [1090]; a release management (mgmt.) repository [1092]; a configuration manager & a golden configuration template (GCT) [1094]; an NFV platform decision analytics [1096]; a platform NoSQL DB [1098]; a platform schedulers and cron jobs [1100]; a VNF backup & upgrade manager [1102]; a micro service auditor [1104]; and a platform operations, administration and maintenance manager [1106]. The NFV infrastructure monitoring manager [1082] monitors the infrastructure part of the NFs, for example, any metrics such as CPU utilization by the VNF. The assure manager [1084] may be responsible for supervising the alarms the vendor may be generating. The performance manager [1086] may be responsible for managing the performance counters. The policy execution engine (PEGN) [1088] may be responsible for managing all the policies. The capacity monitoring manager (CMM) [1090] may be responsible for sending the request to the PEGN [1088]. The release management (mgmt.) repository (RMR) [1092] may be responsible for managing the releases and the images of all the vendor network nodes. The configuration manager & GCT [1094] manages the configuration and GCT of all the vendors. The NFV platform decision analytics (NPDA) [1096] helps in deciding the priority of using the network resources. It may be further noted that the policy execution engine (PEGN) [1088], the configuration manager & GCT [1094] and the NPDA [1096] work together. The platform NoSQL DB [1098] may be a database for storing all the inventory (both physical and logical) as well as the metadata of the VNFs and CNFs. The platform schedulers and cron jobs [1100] schedules tasks such as, but not limited to, triggering of an event, traversing the network graph, etc.
The VNF backup & upgrade manager [1102] takes backup of the images and binaries of the VNFs and the CNFs and produces those backups on demand in case of server failure. The micro service auditor [1104] audits the microservices. For example, in a hypothetical case where instances not instantiated by the MANO architecture [100] are using the network resources, the micro service auditor [1104] audits and reports the same so that resources can be released for the services running in the MANO architecture [100], thereby assuring that the services only run on the MANO platform [100]. The platform operations, administration and maintenance manager [1106] may be used for newer instances that are spawning.
[0049] The platform resource adapters and utilities module [112] further comprises a platform external API adaptor and gateway [1122]; a generic decoder and indexer (XML, CSV, JSON)
[1124]; a docker swarm adaptor [1126]; an OpenStack API adapter [1128]; and a NFV gateway [1130]. The platform external API adaptor and gateway [1122] may be responsible for handling the external services (external to the MANO platform [100]) that require the network resources. The generic decoder and indexer (XML, CSV, JSON) [1124] directly gets the data of the vendor system in the XML, CSV, JSON formats. The docker swarm adaptor [1126] may be the interface provided between the telecom cloud and the MANO architecture [100] for communication. The OpenStack API adapter [1128] may be used to connect with the virtual machines (VMs). The NFV gateway [1130] may be responsible for providing the path to each service going to/coming from the MANO architecture [100].
[0050] Referring to FIG. 2, the computing device [200] may include a bus [202] or other communication mechanism for communicating information, and a hardware processor [204] coupled with the bus [202] for processing information. The hardware processor [204] may be, for example, a general-purpose microprocessor. The computing device [200] may also include a main memory [206], such as a random-access memory (RAM), or other dynamic storage device, coupled to the bus [202] for storing information and instructions to be executed by the processor [204]. The main memory [206] also may be used for storing temporary variables or other intermediate information during execution of the instructions to be executed by the processor [204]. Such instructions, when stored in non-transitory storage media accessible to the processor [204], render the computing device [200] into a special-purpose machine that is customized to perform the operations specified in the instructions. The computing device [200] further includes a read only memory (ROM) [208] or other static storage device coupled to the bus [202] for storing static information and instructions for the processor [204].
[0051] A storage device [210], such as a magnetic disk, optical disk, or solid-state drive, is provided and coupled to the bus [202] for storing information and instructions. The computing device [200] may be coupled via the bus [202] to a display [212], such as a cathode ray tube (CRT), Liquid Crystal Display (LCD), Light Emitting Diode (LED) display, Organic LED (OLED) display, etc., for displaying information to a computer user. An input device [214], including alphanumeric and other keys, touch screen input means, etc., may be coupled to the bus [202] for communicating information and command selections to the processor [204]. Another type of user input device may be a cursor controller [216], such as a mouse, a trackball, or cursor direction keys, for communicating direction information and command selections to the processor [204], and for controlling cursor movement on the display [212]. The input device typically has two degrees of freedom in two axes, a first axis (e.g., x) and a second axis (e.g., y), that allow the device to specify positions in a plane.
[0052] The computing device [200] may implement the techniques described herein using customized hard-wired logic, one or more ASICs or FPGAs, firmware and/or program logic which in combination with the computing device [200] causes or programs the computing device [200] to be a special-purpose machine. According to one implementation, the techniques herein are performed by the computing device [200] in response to the processor [204] executing one or more sequences of one or more instructions contained in the main memory [206]. Such instructions may be read into the main memory [206] from another storage medium, such as the storage device [210]. Execution of the sequences of instructions contained in the main memory [206] causes the processor [204] to perform the process steps described herein. In alternative implementations of the present disclosure, hard-wired circuitry may be used in place of or in combination with software instructions.
[0053] The computing device [200] also may include a communication interface [218] coupled to the bus [202]. The communication interface [218] provides a two-way data communication coupling to a network link [220] that is connected to a local network [222]. For example, the communication interface [218] may be an integrated services digital network (ISDN) card, cable modem, satellite modem, or a modem to provide a data communication connection to a corresponding type of telephone line. As another example, the communication interface [218] may be a local area network (LAN) card to provide a data communication connection to a compatible LAN. Wireless links may also be implemented. In any such implementation, the communication interface [218] sends and receives electrical, electromagnetic or optical signals that carry digital data streams representing various types of information.
[0054] The computing device [200] can send messages and receive data, including program code, through the network(s), the network link [220] and the communication interface [218]. In the Internet example, a server [230] might transmit a requested code for an application program through the Internet [228], the ISP [226], the local network [222], the host [224] and the communication interface [218]. The received code may be executed by the processor [204] as it is received, and/or stored in the storage device [210], or other non-volatile storage for later execution.
[0055] Further, the system [300] may be implemented using the computing device [200] (as shown in FIG. 2). In an implementation, the computing device [200] may be connected to the system [300] to implement the features of the present disclosure.
[0056] Referring to FIG. 3, an exemplary block diagram of the system [300] for load balancing between capacity manager (CM) instances is shown, in accordance with the exemplary implementations of the present disclosure. The system [300] comprises at least one load balancer (LB) unit [302]. The LB unit [302] of the system [300] may comprise at least one transceiver unit [304], and at least one processing unit [310]. Further, the system [300] may be connected with at least one operation and management (OAM) unit [306] and a plurality of capacity manager (CM) instances [308]. Also, all of the components/ units of the system [300] are assumed to be connected to each other unless otherwise indicated below. As shown in FIG. 3, all units shown within the system [300] should also be assumed to be connected to each other. Also, in FIG. 3 only a few units are shown; however, the system [300] may comprise multiple such units, or the system [300] may comprise any such number of said units, as required to implement the features of the present disclosure. In an implementation, the system [300] may reside in a server or a network entity. In another implementation, the system [300] may reside partly in the server/ network entity.
[0057] The system [300] is configured for load balancing between the capacity manager (CM) instances, with the help of the interconnection between the components/units of the system [300].
[0058] As would be understood, the load balancing may refer to the process of distributing a set of tasks over a set of resources, with the aim of making their overall processing more efficient. Load balancing can optimize response time and avoid unevenly overloading some compute nodes while other compute nodes are left idle. The CM instances may refer to software components or systems that are specifically designed to monitor, analyse, and optimize the utilization of resources within a given environment. In an example, the CM instances may be similar to the instances of CMM [1090] as provided in the FIG. 1. Also, the CM service may refer to the service for monitoring capacity of network functions (the VNFs and CNFs), in terms of hardware capacity and load capacity such as CPU utilization based on thread count, RAM, throughput of the hardware, etc. In the context of the present disclosure, load balancing is done in order to evenly distribute the load between different instances of the capacity manager.
[0059] Initially, for load balancing between the CM instances, the transceiver unit [304] receives at the LB unit [302], from the OAM unit [306], a health status information of a plurality of capacity manager (CM) instances [308]. It is to be noted that the health status information is detected by the OAM unit [306] based on a monitoring of the health status information of the plurality of capacity manager (CM) instances [308]. It is further noted that the health status information of each of the plurality of CM instances [308] is indicative of one of a healthy CM instance and a malfunctioning CM instance. In an implementation of the present disclosure, the LB unit [302] may comprise one or more load balancers such as elastic load balancers (ELBs) [602]. The elastic load balancer (ELB) [602] may refer to a scalable and reliable load balancing service that distributes incoming traffic across multiple servers or instances, ensuring optimal performance, scalability, and fault tolerance. Further, in an example, the elastic load balancers [602] of the LB unit [302] may also be similar to the microservice elastic load balancers [1062] as provided in FIG. 1.
[0060] The OAM units [306] are essential components of telecommunication networks that are responsible for managing and monitoring the performance of network elements and services, and for providing a centralized platform for network operators to efficiently oversee and control various aspects of network operations. In an example, the OAM unit [306] may be similar to the platform Operations, Administration, and Maintenance Manager [1106] as provided in FIG. 1. It may be noted that the health status information may refer to data which provides real-time insights into the health and performance of network devices and services, such as information associated with device availability, resource utilization, performance metrics, fault indications, and alerts for hardware failures, software errors, or configuration issues. By monitoring the health status information, network administrators can proactively identify and address potential problems, ensuring optimal network performance and reliability. It may be noted that the health status information may be monitored by utilizing various techniques, such as using simple network management protocols, and performance management systems for collecting, analysing, and visualising the health status information from the various networks and devices.
[0061] Further, the healthy CM instance is one that is functioning as expected, accurately monitoring and managing resources, and providing valuable insights for network optimization. Furthermore, the malfunctioning CM instance may be the instance which exhibits various issues, such as inaccurate resource monitoring, ineffective resource allocation, delayed response times, frequent errors or warnings, etc.
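For illustration only, the healthy/malfunctioning distinction described above may be sketched as a simple classification rule. The field names, the thresholds, and the `classify` helper below are assumptions introduced for this example and are not part of the disclosed system.

```python
from dataclasses import dataclass
from enum import Enum

class HealthState(Enum):
    HEALTHY = "healthy"
    MALFUNCTIONING = "malfunctioning"

@dataclass
class CMHealthStatus:
    """One health record an OAM unit might report for a CM instance (hypothetical fields)."""
    instance_id: str
    cpu_utilization: float  # fraction of CPU in use, 0.0 to 1.0
    error_count: int        # recent errors or warnings observed

def classify(status: CMHealthStatus) -> HealthState:
    # Hypothetical rule: frequent errors or excessive resource usage marks
    # the instance as malfunctioning; otherwise it is considered healthy.
    if status.error_count > 0 or status.cpu_utilization > 0.95:
        return HealthState.MALFUNCTIONING
    return HealthState.HEALTHY
```

In practice the OAM unit would derive such a state from the monitored metrics; the rule above merely illustrates the binary healthy/malfunctioning outcome.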
[0062] The processing unit [310] of the load balancer unit [302] is configured to distribute a data traffic from the malfunctioning CM instance among the healthy CM instances. It is emphasized that the distribution is based on receiving, from one or more of the healthy CM instances, an
indication corresponding to taking an ownership of at least a part of the data traffic from the malfunctioning CM instance.
[0063] As would be understood, the data traffic may refer to the flow of digital information over a network, such as the internet or a private network. The data traffic may be related to the capacity of the NFs, for example, the RAM and CPU capacity of AMF instances, or an application, etc. The data traffic is provided to the LB from all instances of all nodes. Further, the indication may refer to a priority-based message based on which the data traffic is distributed. It may be noted that such priority may be based on a first-come-first-serve basis. For example, the healthy instance which approaches the OAM first is given first priority; the OAM assigns the priority to each CM instance, and the data traffic is distributed accordingly.
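The first-come-first-serve priority assignment described above can be sketched as follows. The class and method names are hypothetical; the sketch only shows the ordering behaviour, not the actual OAM/LB messaging.

```python
class OwnershipRegistry:
    """First-come-first-serve priority bookkeeping, as an illustrative sketch
    of how an OAM unit might order ownership claims from healthy CM instances."""

    def __init__(self):
        # instance_id -> priority; lower numbers mean earlier claims.
        self._priorities = {}

    def claim(self, instance_id: str) -> int:
        # The first healthy instance to approach the OAM receives priority 0,
        # the next receives 1, and so on; repeat claims keep their priority.
        if instance_id not in self._priorities:
            self._priorities[instance_id] = len(self._priorities)
        return self._priorities[instance_id]

    def next_owner(self) -> str:
        # The LB hands the malfunctioning instance's traffic to the
        # earliest (highest-priority) claimant.
        return min(self._priorities, key=self._priorities.get)
```

Here the instance that claims first always wins `next_owner`, mirroring the first-come-first-serve rule in the paragraph above.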
[0064] In an exemplary aspect of the present disclosure, the processing unit [310] is configured to distribute the data traffic based on a timeout indication related to serving a transaction. The transaction may refer to a task being performed by the CM instance; for example, the transaction may be the request and/or response or the service associated with the instance.
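As a minimal sketch of the timeout indication, assuming a fixed per-transaction deadline (the function name and the 5-second default are illustrative assumptions):

```python
import time

def transaction_timed_out(started_at, timeout_s=5.0, now=None):
    """Hypothetical timeout check: when the instance has not served a
    transaction within `timeout_s` seconds of `started_at`, the LB may treat
    this as the timeout indication and redistribute the traffic."""
    if now is None:
        now = time.monotonic()  # monotonic clock avoids wall-clock jumps
    return (now - started_at) > timeout_s
```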
[0065] In an exemplary aspect of the present disclosure, the processing unit [310] is configured to distribute the data traffic based on at least one of a header-based routing procedure, and a context-based routing procedure. The header-based routing procedure may use information contained within the packet header, such as the source IP address, destination IP address, port numbers, or protocol type, to determine the routing path. The context-based routing procedure may consider additional factors beyond packet headers, such as network conditions, application requirements, or policy rules, to determine the routing path.
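The two routing procedures can be combined in a toy decision function like the one below. The header keys (`x-target-instance`, `source_ip`) and the shape of the `context` dictionary are assumptions made for this sketch.

```python
def route(header, context, instances):
    """Toy routing decision combining a header-based rule with a
    context-based rule (illustrative only)."""
    # Header-based: honour an explicit target carried in the packet header.
    pinned = header.get("x-target-instance")
    if pinned in instances:
        return pinned
    # Context-based: skip instances the current context reports as overloaded.
    candidates = [i for i in instances if i not in context.get("overloaded", set())]
    # Deterministic fallback: spread remaining traffic by source address.
    key = sum(header.get("source_ip", "").encode())
    return candidates[key % len(candidates)]
```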
[0066] In an exemplary aspect of the present disclosure, the processing unit [310] is configured to distribute the data traffic among the plurality of CM instances [308] in a round robin manner. The round robin manner may refer to a simple load balancing algorithm that distributes incoming requests to servers in a circular fashion in which each server may be assigned a weight, and requests may be distributed to servers based on their weight.
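The round robin distribution described above, including the weighted variant, can be sketched as follows; the helper names are illustrative.

```python
from itertools import cycle

def round_robin(instances):
    """Plain round robin: yields instances in a repeating circular order."""
    return cycle(instances)

def weighted_cycle(weights):
    """One pass of a simple weighted round robin: each instance appears as
    many times as its weight before the order repeats (illustrative)."""
    order = []
    for instance, weight in weights.items():
        order.extend([instance] * weight)
    return order
```

A weight of 2 versus 1 thus sends an instance twice as many requests per cycle as its peer.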
[0067] In an exemplary aspect of the present disclosure, the health status information of each of the plurality of CM instances [308] is maintained in at least one of a local cache associated with each of the plurality of CM instances [308], and a database stored in a storage unit [314]. As would be known, the local cache may refer to a temporary storage area located on a device or system that stores frequently accessed data to improve performance. Also, the database may refer to a structured collection of data that is organized in a way that allows for efficient storage, retrieval, and management of information.
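The cache-plus-database arrangement above may be sketched as a two-tier store. The class, the SQLite schema, and the write-through policy are assumptions chosen for this example; an in-memory SQLite database stands in for the storage unit.

```python
import sqlite3

class HealthStore:
    """Two-tier sketch: a local in-process cache in front of a database."""

    def __init__(self):
        self._cache = {}  # local cache: fast lookups of recent statuses
        self._db = sqlite3.connect(":memory:")
        self._db.execute(
            "CREATE TABLE health (instance_id TEXT PRIMARY KEY, state TEXT)")

    def update(self, instance_id, state):
        # Write through: keep the cache and the database consistent.
        self._cache[instance_id] = state
        self._db.execute(
            "INSERT OR REPLACE INTO health VALUES (?, ?)", (instance_id, state))

    def lookup(self, instance_id):
        if instance_id in self._cache:  # cache hit
            return self._cache[instance_id]
        row = self._db.execute(         # cache miss: fall back to the database
            "SELECT state FROM health WHERE instance_id = ?",
            (instance_id,)).fetchone()
        return row[0] if row else None
```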
[0068] In an exemplary aspect of the present disclosure, the transceiver unit [304] is configured to transmit an acknowledgement to the plurality of CM instances [308], wherein the acknowledgement is indicative of distribution of data traffic from the malfunctioning CM instance among the healthy CM instances.
[0069] In an exemplary aspect of the present disclosure, the transceiver unit [304] is configured to receive, from the OAM unit [306], an alert related to one of: an addition of a CM instance and a deletion of a CM instance among the plurality of CM instances [308]. As would be understood, the alert may be an indication for the addition or the deletion of the CM instance.
[0070] In an exemplary aspect of the present disclosure, post the receiving, from one or more of the healthy CM instances, of the indication corresponding to taking the ownership of at least a part of the data traffic from the malfunctioning CM instance, the one or more of the healthy CM instances fetch a state information of the incomplete transaction being served by the malfunctioning CM instance. The state information of the incomplete transaction may relate to the process or the service of the network function. For example, an instance is serving a request which comprises 4 stages: S1, S2, S3, and S4. Now, if the instance starts malfunctioning after processing S2 successfully, that state of the service/process is saved, and the healthy instance will fetch the state of the process and resume its function from S3 only. It will not restart the process from the beginning.
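The S1-S4 resumption example above can be sketched as follows; the saved-state shape (`last_completed_stage`) and the helper name are assumptions for illustration.

```python
STAGES = ["S1", "S2", "S3", "S4"]

def stages_to_resume(saved_state):
    """Given the state saved by the malfunctioning instance, return the
    stages a healthy instance still has to run (illustrative)."""
    last_done = saved_state.get("last_completed_stage")
    if last_done is None:
        return list(STAGES)  # nothing saved: run all four stages
    # Resume right after the last successfully completed stage, rather
    # than restarting the process from the beginning.
    return STAGES[STAGES.index(last_done) + 1:]
```

So a transaction that malfunctioned after S2 resumes at S3, matching the example in the paragraph above.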
[0071] In an exemplary aspect of the present disclosure, the indication is based on a priority assigned by the OAM unit [306] to each of the healthy CM instances.
[0072] Referring to FIG. 4, an exemplary method flow diagram [400] for load balancing between capacity manager (CM) instances, in accordance with exemplary implementations of the present disclosure is shown. In an implementation the method [400] may be performed by the system [300] (as shown in FIG. 3). Further, in an implementation, the system [300] may be present in a server device to implement the features of the present disclosure. Also, as shown in FIG. 4, the method [400] starts at step [402] .
[0073] At step [404], the method [400] comprises receiving, by a transceiver unit [304] at a load balancer (LB) unit [302], from an operation and management (OAM) unit [306], a health status information of a plurality of capacity manager (CM) instances [308]. It is to be noted that the health status information of the plurality of CM instances [308] is detected by the OAM unit [306] based on a monitoring of the health status information of the plurality of capacity manager (CM) instances [308]. It is further noted that the health status information of each of the plurality of CM instances [308] is indicative of one of a healthy CM instance and a malfunctioning CM instance.
[0074] At step [406], the method [400] comprises distributing, by a processing unit [310] at the LB unit [302], a data traffic from the malfunctioning CM instance among the healthy CM instances. It is emphasized that the distribution is based on receiving, from one or more of the healthy CM instances, an indication corresponding to taking an ownership of at least a part of the data traffic from the malfunctioning CM instance.
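A minimal sketch of this redistribution step follows; the class and method names (`LoadBalancer`, `report`, `redistribute`) and the representation of traffic as discrete parts are assumptions for illustration, not details from the disclosure:

```python
# Hypothetical sketch: the LB reassigns a malfunctioning instance's traffic
# only to healthy instances that have indicated ownership of a part of it.

class LoadBalancer:
    def __init__(self, instances):
        self.health = {i: "healthy" for i in instances}
        self.assignments = {i: [] for i in instances}  # instance -> traffic parts

    def report(self, instance, status):
        # Health status as reported by the OAM unit.
        self.health[instance] = status

    def redistribute(self, bad, claims):
        """Move traffic of a malfunctioning instance to claiming healthy instances.

        claims maps a healthy instance to the number of parts it takes ownership of.
        """
        parts = self.assignments.pop(bad)
        for inst, count in claims.items():
            if self.health.get(inst) != "healthy":
                continue                      # only healthy instances may claim
            taken, parts = parts[:count], parts[count:]
            self.assignments[inst].extend(taken)
        return parts                          # any unclaimed remainder

lb = LoadBalancer(["cm1", "cm2", "cm3"])
lb.assignments["cm1"] = ["t1", "t2", "t3"]
lb.report("cm1", "malfunctioning")
leftover = lb.redistribute("cm1", {"cm2": 2, "cm3": 1})
```

Here cm2 takes ownership of two parts and cm3 of one, so all three parts of cm1's traffic are reassigned and nothing is left over.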
[0075] In an exemplary aspect of the present disclosure, the method [400] comprises distributing, by the processing unit [310], the data traffic based on a timeout indication related to serving a transaction.
[0076] In an exemplary aspect of the present disclosure, post the receiving, from one or more of the healthy CM instances, the indication corresponding to taking the ownership of at least a part of the data traffic from the malfunctioning CM instance, the method [400] comprises fetching, by the one or more of the healthy CM instances, a state information of the incomplete transaction being served by the malfunctioning CM instance.
[0077] In an exemplary aspect of the present disclosure, the indication is based on a priority assigned by the OAM unit [306] to each of the healthy CM instances.
[0078] In an exemplary aspect of the present disclosure, the method [400] comprises distributing, by the processing unit [310], the data traffic based on at least one of a Header based routing procedure, and context-based routing procedure.
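The two routing procedures can be illustrated with a short sketch. The header names (`X-CM-Instance`, `X-Session-Id`) are assumptions for illustration only; the disclosure does not specify them:

```python
# Illustrative only: header-based routing reads the target CM instance directly
# from a request header (avoiding a database hit), while context-based routing
# falls back to a lookup table keyed by session context.

def route(headers: dict, context_table: dict, default: str) -> str:
    target = headers.get("X-CM-Instance")     # header-based: direct, no DB hit
    if target:
        return target
    session = headers.get("X-Session-Id")     # context-based fallback
    return context_table.get(session, default)

print(route({"X-CM-Instance": "cm2"}, {}, "cm1"))           # cm2
print(route({"X-Session-Id": "s1"}, {"s1": "cm3"}, "cm1"))  # cm3
```

Header-based routing is what saves the database hit mentioned later in paragraph [0093]: the target is read straight from the request.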
[0079] In an exemplary aspect of the present disclosure, the method [400] comprises distributing, by the processing unit [310], the data traffic among the plurality of CM instances [308] in a round robin manner.
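Round-robin distribution simply cycles through the instance list in order; a minimal sketch (instance names are illustrative):

```python
from itertools import cycle

# Round-robin distribution over the CM instances, as described above:
# successive requests go to cm1, cm2, cm3, cm1, ... in turn.
instances = ["cm1", "cm2", "cm3"]
rr = cycle(instances)

assignments = [next(rr) for _ in range(5)]
print(assignments)  # ['cm1', 'cm2', 'cm3', 'cm1', 'cm2']
```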
[0080] In an exemplary aspect of the present disclosure, the method [400] comprises maintaining the health status information of each of the plurality of CM instances [308] in at least one of a local cache [3082] associated with the each of the plurality of CM instances [308], and a database [3142] stored in a storage unit [314].
[0081] In an exemplary aspect of the present disclosure, the method [400] comprises transmitting, by the transceiver unit [304], an acknowledgement to the plurality of CM instances [308], wherein the acknowledgement is indicative of distribution of data traffic from the malfunctioning CM instance among the healthy CM instances.
[0082] In an exemplary aspect of the present disclosure, the method [400] comprises receiving, by the transceiver unit [304] at the LB unit [302] from the OAM unit [306], an alert related to one of: an addition of a CM instance and a deletion of a CM instance among the plurality of the CM instances [308].
[0083] Thereafter, the method [400] terminates at step [408].
[0084] Referring to FIG. 5, an exemplary method flow diagram [500] for load balancing between capacity manager (CM) instances, in accordance with exemplary implementations of the present disclosure, is shown. In an implementation, the method [500] may be performed by the system [300] (as shown in FIG. 3). Further, in an implementation, the system [300] may be present in a server device to implement the features of the present disclosure. Also, as shown in FIG. 5, the method [500] starts at step [502].
[0085] At step [504], the LB unit [302] receives the health status information of the plurality of capacity manager (CM) instances [308] based on a monitoring of the health status information of the plurality of capacity manager (CM) instances [308], and accordingly determines whether each CM instance is a healthy CM instance or a malfunctioning CM instance.
[0086] Then, at step [506], the OAM unit [306] alerts the LB unit [302] by sending the alert related to the addition or deletion of CM instances. It may be noted that CM instances may be added to the plurality of CM instances, and similarly, CM instances may be deleted from the plurality of CM instances. The plurality of CM instances may be within a cluster of various CM instances.
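A hypothetical handler for the OAM add/delete alerts at step [506] is sketched below; the alert dictionary shape is an assumption made for illustration:

```python
# The LB keeps its view of the CM cluster in sync with the OAM alerts, so that
# newly added instances can receive traffic and deleted ones no longer do.

def handle_alert(pool: set, alert: dict) -> set:
    if alert["type"] == "add":
        pool.add(alert["instance"])
    elif alert["type"] == "delete":
        pool.discard(alert["instance"])   # no error if already absent
    return pool

pool = {"cm1", "cm2"}
handle_alert(pool, {"type": "add", "instance": "cm3"})
handle_alert(pool, {"type": "delete", "instance": "cm1"})
print(sorted(pool))  # ['cm2', 'cm3']
```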
[0087] Further, at step [508], the LB unit [302] takes the data traffic from the malfunctioning CM instances and distributes the data traffic to the healthy CM instances. It may be noted that the distribution is based on the indication, which may select a particular instance from among the healthy CM instances and direct the data traffic towards it. Further, in order to distribute the data traffic, the CM LB interface [316] may be used, which may be a component used for exchanging information between the CM instances and the LB unit [302], and which may utilize different communication protocols for exchanging information, as would be obvious to a person skilled in the art. The CM LB interface [316] is the interface between the LB and each CM instance for transmitting data traffic and transactions and for receiving indications and acknowledgements, i.e., all of the interactions between the LB and the CM instances may communicate using this interface. Further, the CM LB interface [316] may be an HTTP interface which may be located at the LB, at the CM instance, or at both.
[0088] The method [500] herein terminates at step [510].
[0089] Referring to FIG. 6, another exemplary block diagram of a system architecture [600] for load balancing between capacity manager (CM) instances is shown in accordance with the exemplary embodiments of the present disclosure. The system architecture [600] comprises a user interface layer [102], an identity and access manager (IAM) [1064], one or more elastic load balancers (ELB) [602], an event routing manager (ERM) [1070], a plurality of CM instances [604], an elastic search cluster [606], an Operations Administration and Maintenance (OAM) unit [306], and a container network function (CNF) lifecycle manager [1052].
[0090] The user interface layer [102] may be communicatively coupled with the elastic search cluster [606]. Also, the user interface layer [102] may be connected with the IAM [1064] and the ERM [1070] using the ELB [602]. As would be understood, an ELB is a scalable and reliable load balancing service used to distribute incoming traffic across multiple servers or instances, ensuring optimal performance, scalability, and fault tolerance. Similarly, the ERM [1070] may be connected with the plurality of CM instances [604] using the ELB [602]. The elastic search cluster [606] may be connected with the plurality of CM instances [604]. The elastic search cluster [606] may refer to a distributed search engine designed to handle large volumes of data and complex search queries.
[0091] Another aspect of the present disclosure may relate to a non-transitory computer-readable storage medium storing instructions for load balancing between capacity manager (CM) instances, the storage medium comprising executable code which, when executed by one or more units of a system [300], causes a transceiver unit [304] to receive, from an operation and management (OAM) unit [306], a health status information of a plurality of capacity manager (CM) instances [308]. The health status information of the plurality of CM instances [308] is detected by the OAM unit [306] based on a monitoring of the health status information of the plurality of capacity manager (CM) instances [308]. The health status information of each of the plurality of CM instances [308] is indicative of one of a healthy CM instance and a malfunctioning CM instance. Further, the executable code, when executed, further causes a processing unit [310] to distribute a data traffic from the malfunctioning CM instance among the healthy CM instances. The distribution is based on receiving, from one or more of the healthy CM instances, an indication corresponding to taking an ownership of at least a part of the data traffic from the malfunctioning CM instance.
[0092] Further, in accordance with the present disclosure, it is to be acknowledged that the functionality described for the various components/units can be implemented interchangeably. While specific embodiments may disclose a particular functionality of these units for clarity, it is recognized that various configurations and combinations thereof are within the scope of the disclosure. The functionality of specific units as disclosed in the disclosure should not be construed as limiting the scope of the present disclosure. Consequently, alternative arrangements and substitutions of units, provided they achieve the intended functionality described herein, are considered to be encompassed within the scope of the present disclosure.
[0093] As is evident from the above, the present disclosure provides a technically advanced solution for load balancing between capacity manager (CM) instances, including the ability to support HTTP/HTTPS in parallel (configurable). Further, the present solution ensures routing of client requests across all servers in a manner that maximizes speed and capacity utilization, and additionally ensures header-based routing, which saves time and database hits. Furthermore, the present solution ensures an asynchronous, event-based implementation to utilize the interface efficiently, and fault tolerance for any failure: the interface works in a high-availability mode, and if one capacity manager instance goes down, the next available instance will take care of the request.
[0094] While considerable emphasis has been placed herein on the disclosed implementations, it will be appreciated that many implementations can be made, and that many changes can be made to the implementations, without departing from the principles of the present disclosure. These and other changes in the implementations of the present disclosure will be apparent to those skilled in the art, whereby it is to be understood that the foregoing descriptive matter is illustrative and non-limiting.
Claims
1. A method [400] of load balancing between capacity manager (CM) instances, the method [400] comprising:
- receiving, by a transceiver unit [304] at a load balancer (LB) unit [302], from an operation and management (OAM) unit [306], a health status information of a plurality of capacity manager (CM) instances [308], wherein the health status information of the plurality of CM instances [308] is detected by the OAM unit [306] based on a monitoring of the health status information of the plurality of capacity manager (CM) instances [308], wherein the health status information of each of the plurality of CM instances [308] is indicative of one of a healthy CM instance and a malfunctioning CM instance; and
- distributing, by a processing unit [310] at the LB unit [302], a data traffic from the malfunctioning CM instance, among the healthy CM instances, wherein the distribution is based on receiving, from one or more of the healthy CM instances, an indication corresponding to taking an ownership of at least a part of the data traffic from the malfunctioning CM instance.
2. The method [400] as claimed in claim 1, wherein the method [400] comprises distributing, by the processing unit [310], the data traffic based on a timeout indication related to serving a transaction.
3. The method [400] as claimed in claim 1, wherein the method [400] comprises distributing, by the processing unit [310], the data traffic based on at least one of a Header based routing procedure, and context-based routing procedure.
4. The method [400] as claimed in claim 1, wherein the method [400] comprises distributing, by the processing unit [310], the data traffic among the plurality of CM instances [308] in a round robin manner.
5. The method [400] as claimed in claim 1, wherein the method [400] comprises maintaining the health status information of each of the plurality of CM instances [308] in at least one of a local cache associated with the each of the plurality of CM instances [308], and a database stored in a storage unit [314].
6. The method [400] as claimed in claim 1, wherein the method [400] comprises transmitting, by the transceiver unit [304], an acknowledgement to the plurality of CM instances [308], wherein the acknowledgement is indicative of distribution of data traffic from the malfunctioning CM instance among the healthy CM instances.
7. The method [400] as claimed in claim 1, the method [400] comprising receiving, by the transceiver unit [304] at the LB unit [302] from the OAM unit [306], an alert related to one of: an addition of a CM instance and a deletion of a CM instance among the plurality of the CM instances [308].
8. The method [400] as claimed in claim 1, wherein post the receiving, from one or more of the healthy CM instances, the indication corresponding to taking the ownership of at least a part of the data traffic from the malfunctioning CM instance, the method [400] comprises: - fetching, by the one or more of the healthy CM instances, a state information of the incomplete transaction being served by the malfunctioning CM instance.
9. The method [400] as claimed in claim 1, wherein the indication is based on a priority assigned by the OAM unit [306] to each of the healthy CM instances.
10. The method as claimed in claim 1, wherein the data traffic is distributed, by the processing unit [310] at the LB unit [302], from the malfunctioning CM instance, among the healthy CM instances, over a CM LB interface [316].
11. A system [300] of load balancing between capacity manager (CM) instances, the system [300] comprising a load balancer (LB) unit [302], the load balancer unit [302] further comprising: a transceiver unit [304] configured to:
- receive, from an operation and management (OAM) unit [306], a health status information of a plurality of capacity manager (CM) instances [308], wherein the health status information of the plurality of CM instances [308] is detected by the OAM unit [306] based on a monitoring of the health status information of the plurality of capacity manager (CM) instances [308], wherein the health status information of each of the plurality of CM instances [308] is indicative of one of a healthy CM instance and a malfunctioning CM instance; and
- a processing unit [310] configured to:
- distribute a data traffic from the malfunctioning CM instance among the healthy CM instances, wherein the distribution is based on receiving, from one or more of the healthy CM instances, an indication corresponding to taking an ownership of at least a part of the data traffic from the malfunctioning CM instance.
12. The system [300] as claimed in claim 11, wherein the processing unit [310] is configured to distribute the data traffic based on a timeout indication related to serving a transaction.
13. The system [300] as claimed in claim 11, wherein the processing unit [310] is configured to distribute the data traffic based on at least one of a Header based routing procedure, and context-based routing procedure.
14. The system [300] as claimed in claim 11, wherein the processing unit [310] is configured to distribute the data traffic among the plurality of CM instances [308] in a round robin manner.
15. The system [300] as claimed in claim 11, wherein the health status information of each of the plurality of CM instances [308] is maintained in at least one of a local cache [3082] associated with the each of the plurality of CM instances [308], and a database stored in a storage unit [314].
16. The system [300] as claimed in claim 11, wherein the transceiver unit [304] is configured to transmit an acknowledgement to the plurality of CM instances [308], wherein the acknowledgement is indicative of distribution of data traffic from the malfunctioning CM instance among the healthy CM instances.
17. The system [300] as claimed in claim 11, wherein the transceiver unit [304] is configured to receive, from the OAM unit [306], an alert related to one of an addition of a CM instance and a deletion of a CM instance among the plurality of CM instances [308].
18. The system [300] as claimed in claim 11, wherein post the receiving, from one or more of the healthy CM instances, the indication corresponding to taking the ownership of at least a part of the data traffic from the malfunctioning CM instance, the one or more of the healthy CM instances fetch a state information of the incomplete transaction being served by the malfunctioning CM instance.
19. The system [300] as claimed in claim 11, wherein the indication is based on a priority assigned by the OAM unit [306] to each of the healthy CM instances.
20. The system [300] as claimed in claim 11, wherein the processing unit [310] at the LB unit [302] is further configured to distribute the data traffic, from the malfunctioning CM instance, among the healthy CM instances, over a CM_LB interface [316].
21. A non-transitory computer-readable storage medium storing instructions for load balancing between capacity manager (CM) instances, the storage medium comprising executable code which, when executed by one or more units of a system [300], causes: a transceiver unit [304] to:
- receive, from an operation and management (OAM) unit [306], a health status information of a plurality of capacity manager (CM) instances [308], wherein the health status information of the plurality of CM instances [308] is detected by the OAM unit [306] based on a monitoring of the health status information of the plurality of capacity manager (CM) instances [308], wherein the health status information of each of the plurality of CM instances [308] is indicative of one of a healthy CM instance and a malfunctioning CM instance; and a processing unit [310] to:
- distribute a data traffic from the malfunctioning CM instance among the healthy CM instances, wherein the distribution is based on receiving, from one or more of the healthy CM instances, an indication corresponding to taking an ownership of at least a part of the data traffic from the malfunctioning CM instance.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
IN202321062849 | 2023-09-19 | ||
IN202321062849 | 2023-09-19 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2025062433A1 true WO2025062433A1 (en) | 2025-03-27 |
Family
ID=95072334
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/IN2024/051795 WO2025062433A1 (en) | 2023-09-19 | 2024-09-19 | Method and system of load balancing between capacity manager instances |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20050120095A1 (en) * | 2003-12-02 | 2005-06-02 | International Business Machines Corporation | Apparatus and method for determining load balancing weights using application instance statistical information |
US9871712B1 (en) * | 2013-04-16 | 2018-01-16 | Amazon Technologies, Inc. | Health checking in a distributed load balancer |