WO2025017726A1 - Method and system for creating a network area - Google Patents
- Publication number
- WO2025017726A1 (PCT/IN2024/051302)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- network
- network area
- request
- data
- indexer
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/01—Protocols
- H04L67/10—Protocols in which an application is distributed across nodes in the network
- H04L67/1001—Protocols in which an application is distributed across nodes in the network for accessing one among a plurality of replicated servers
- H04L67/1004—Server selection for load balancing
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/50—Network services
- H04L67/56—Provisioning of proxy services
- H04L67/562—Brokering proxy services
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/50—Network services
- H04L67/56—Provisioning of proxy services
- H04L67/568—Storing data temporarily at an intermediate stage, e.g. caching
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/50—Network services
- H04L67/60—Scheduling or organising the servicing of application requests, e.g. requests for application data transmissions using the analysis and optimisation of the required network resources
Definitions
- the present disclosure generally relates to a network performance management system. More particularly, the present disclosure relates to a method and system for creating a static network area and a dynamic network area.
- Network performance management systems typically track network elements and data from network monitoring tools and combine and process such data to determine key performance indicators (KPI) of the network.
- Integrated performance management systems provide the means to visualize the network performance data so that network operators and other relevant stakeholders are able to identify the service quality of the overall network and of individual or grouped network elements. By having an overall as well as detailed view of the network performance, the network operators can detect, diagnose, and remedy actual service issues, as well as predict potential service issues or failures in the network and take precautionary measures accordingly.
- the existing solutions are inefficient at deriving a required new field in the documents and results from two or more existing fields, or from a sub-part of an existing field, available at their disposal. Furthermore, these limitations also leave the existing solutions unable to help operations teams roll up and drill down the monitoring of KPIs and counters for troubleshooting.
- CNAs converged network areas
- HNAs hierarchical network areas
- SNAs static network areas
- a method for creating a network area includes receiving, at a User Interface (UI), a request for creating the network area.
- the method includes transmitting, by a load balancer, the request to an integrated performance management (IPM).
- the method includes storing, by the IPM, data associated with the request at a Distributed Data Lake (DDL).
- the method includes transmitting, by the IPM, the data associated with the request to an Indexer (IN).
- the method includes analysing, by the Indexer, the data associated with the request to create the network area.
- the method includes creating, by the Indexer, the network area based on the analysis of the data associated with the request.
- the method includes enriching, by the indexer, network data associated with the created network area based on a set of user input received from a user. Thereafter, the method includes uploading, by the indexer, the enriched network data at the DDL for storage.
- the enrichment of the network data is performed at a predefined scheduled interval of time.
- the network area comprises at least one of a static network area or a dynamic network area.
- the request to create the network area comprises at least a selection of one or more nodes and one or more categories for which the network area is to be created.
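By way of illustration only, such a creation request might be represented as a simple structure carrying the selected nodes, categories, and area type; the field names and values below are assumptions for demonstration and are not drawn from the disclosure.

```python
# Illustrative sketch of a network-area creation request.
# All field names here are assumptions, not taken from the patent text.

def validate_request(request: dict) -> bool:
    """Check that a creation request names an area type and selects
    at least one node and at least one category."""
    return (
        request.get("area_type") in ("static", "dynamic")
        and len(request.get("nodes", [])) >= 1
        and len(request.get("categories", [])) >= 1
    )

request = {
    "area_type": "static",
    "nodes": ["gNB-001", "gNB-002"],
    "categories": ["RAN", "Core"],
}
print(validate_request(request))  # True
```

A request that selects no nodes or no categories fails this check, mirroring the requirement that at least one of each be selected.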
- the set of user input for the enrichment comprises: a first input from the user for selection of at least one existing field from which the new network area is to be derived; and a second input from the user for selection of an operation to be executed on the selected at least one existing field.
- the enrichment of the network data based on the set of user input comprises: generating, by the indexer, a value corresponding to the executed operation on the selected at least one existing field; mapping, by the indexer, the generated value to a pre-defined value provided within a data set; and assigning, by the indexer, the mapped value to the created new network area.
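The three enrichment steps recited above (generate a value by executing an operation on selected fields, map it to a pre-defined value, assign it to the new network area) can be sketched as follows. The operation names, mapping table, and record fields are illustrative assumptions, not the disclosure's implementation.

```python
# Hedged sketch of the three enrichment steps: generate, map, assign.
# Operation names and the mapping table are illustrative assumptions.

OPERATIONS = {
    "prefix": lambda value, n=3: value[:n],      # sub-part of an existing field
    "concat": lambda *values: "_".join(values),  # derive from two or more fields
}

def enrich(record: dict, fields: list, operation: str,
           mapping: dict, new_field: str) -> dict:
    # Step 1: generate a value by executing the operation on the selected fields.
    generated = OPERATIONS[operation](*(record[f] for f in fields))
    # Step 2: map the generated value to a pre-defined value in the data set.
    mapped = mapping.get(generated, generated)
    # Step 3: assign the mapped value to the newly created network-area field.
    record[new_field] = mapped
    return record

record = {"site": "DEL-Okhla-07", "region": "North"}
mapping = {"DEL": "Delhi NCR"}
enriched = enrich(record, ["site"], "prefix", mapping, "network_area")
print(enriched["network_area"])  # Delhi NCR
```

Here a three-character prefix of an existing `site` field is mapped to a pre-defined area name and assigned as the new network area.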
- a system for creating a network area comprising: a User Interface (UI), configured to receive a request for creating the network area; and a load balancer, configured to transmit the request to an integrated performance management (IPM). Further, the IPM is configured to: store data associated with the request at a Distributed Data Lake (DDL); and transmit the data associated with the request to an Indexer (IN). Furthermore, the system comprises the Indexer, configured to: analyse the data associated with the request to create the network area; create the network area based on the analysis of the data associated with the request; enrich network data associated with the created network area based on a set of user input received from a user; and upload the enriched network data at the DDL for storage.
- a user equipment (UE) for creating a network area comprising a processor configured to: send, via a User Interface (UI), a request for creating the network area; transmit, via a load balancer, the request to an integrated performance management (IPM); store, via the IPM, data associated with the request at a Distributed Data Lake (DDL); transmit, via the IPM, the data associated with the request to an Indexer (IN); analyse, via the Indexer, the data associated with the request to create the network area; create, via the Indexer, the network area based on the analysis of the data associated with the request; enrich, via the Indexer, network data associated with the created network area based on a set of user input received from a user; and upload, via the indexer, the enriched network data at the DDL for storage.
- a non-transitory computer-readable storage medium storing instruction for creating a network area
- the storage medium comprising executable code which, when executed by one or more units of a system, causes: a User Interface (UI) [202] to receive a request for creating the network area; a load balancer [100k] to transmit the request to an integrated performance management (IPM) [100a]; the IPM [100a] to: store data associated with the request at a Distributed Data Lake (DDL) [100u]; transmit the data associated with the request to an Indexer (IN) [208]; and an Indexer [208] to: analyse the data associated with the request to create the network area, create the network area based on the analysis of the data associated with the request, enrich network data associated with the created network area based on a set of user input received from a user, and upload the enriched network data at the DDL [100u] for storage.
- FIG. 1 illustrates an exemplary block diagram of a network performance management system, in accordance with the exemplary embodiments of the present invention.
- Fig. 2 illustrates an exemplary system for creating a network area i.e., a static network area and a dynamic network area, in accordance with the exemplary embodiments of the present invention.
- Fig. 3 illustrates an exemplary method flow diagram indicating the process for creating a network area i.e., a static network area and a dynamic network area, in accordance with the exemplary embodiments of the present invention.
- Fig. 4 illustrates an exemplary process for creating a network area i.e., a static network area and a dynamic network area, in accordance with the exemplary embodiments of the present invention.
- FIG. 5 illustrates an exemplary block diagram of a computing device upon which an embodiment of the present disclosure may be implemented.
- the term "exemplary" and/or "demonstrative" is used herein to mean serving as an example, instance, or illustration. For the avoidance of doubt, the subject matter disclosed herein is not limited by such examples.
- any aspect or design described herein as “exemplary” and/or “demonstrative” is not necessarily to be construed as preferred or advantageous over other aspects or designs, nor is it meant to preclude equivalent exemplary structures and techniques known to those of ordinary skill in the art.
- a “processing unit” or “processor” or “operating processor” includes one or more processors, wherein processor refers to any logic circuitry for processing instructions.
- a processor may be a general-purpose processor, a special purpose processor, a conventional processor, a digital signal processor, a plurality of microprocessors, one or more microprocessors in association with a DSP core, a controller, a microcontroller, Application Specific Integrated Circuits, Field Programmable Gate Array circuits, any other type of integrated circuits, etc.
- the processor may perform signal coding data processing, input/output processing, and/or any other functionality that enables the working of the system according to the present disclosure. More specifically, the processor or processing unit is a hardware processor.
- a user equipment may be any electrical, electronic and/or computing device or equipment, capable of implementing the features of the present disclosure.
- the user equipment/device may include, but is not limited to, a mobile phone, smart phone, laptop, a general-purpose computer, desktop, personal digital assistant, tablet computer, wearable device or any other computing device which is capable of implementing the features of the present disclosure.
- the user device may contain at least one input means configured to receive an input from at least one of a transceiver unit, a processing unit, a storage unit, a detection unit and any other such unit(s) which are required to implement the features of the present disclosure.
- storage unit or “memory unit” refers to a machine or computer-readable medium including any mechanism for storing information in a form readable by a computer or similar machine.
- a computer-readable medium includes read-only memory (“ROM”), random access memory (“RAM”), magnetic disk storage media, optical storage media, flash memory devices or other types of machine-accessible storage media.
- the storage unit stores at least the data that may be required by one or more units of the system to perform their respective functions.
- an indexer refers to a component within the network system that analyses data associated with a user's request to create and enrich network areas.
- the indexer performs data analysis, creates network areas based on the analysed data, enriches the network data by applying user-defined operations to existing fields, and assigns the resulting values to the new network areas.
- the enriched data is then stored in the Distributed Data Lake (DDL) for future use and retrieval.
- nodes refer to individual or multiple points within a network that can process or transfer data. These nodes can represent various entities, such as devices, servers, or virtual entities, and are essential components in the creation and management of network areas. The nodes serve as the building blocks for network configurations, allowing users to define and categorize different segments of the network based on specific criteria and operations.
- categories refer to classifications or groups within a network that organize nodes or data based on shared characteristics or attributes. These categories help in structuring the network by grouping similar types of data or nodes, facilitating more targeted analysis and management. Users can select these categories when creating network areas, enabling customized and efficient organization of network resources.
- network area refers to a defined segment within a network created for specific analysis or management purposes.
- the network area can encompass static or dynamic configurations and includes selected nodes and categories that are grouped based on user-defined criteria.
- the segmentation allows for focused monitoring, performance assessment, and enrichment of network data, enhancing the ability to drill down or roll up information for comprehensive network management.
- the present disclosure aims to overcome the above-mentioned and other existing problems in this field of technology by providing a solution that can create a static network area and a dynamic network area from the existing information/fields, or from a sub-part of an existing field, available at disposal in network systems, wherein the dynamic network area and the static network area are created at least to fulfill a user requirement of a new field in the documents and results.
- Network Areas i.e., the dynamic network area and the static network area can be created at different granularities.
- Fig. 1 illustrates an exemplary block diagram of a network performance management system [100], in accordance with the exemplary embodiments of the present invention.
- the network performance management system [100] comprises various sub-systems such as: integrated performance management system [100a], normalization layer [100b], computation layer [100d], anomaly detection layer [100o], streaming engine [100l], load balancer [100k], operations and management system [100p], API gateway system [100r], analysis engine [100h], parallel computing framework [100i], forecasting engine [100t], distributed file system [100j], mapping layer [100s], distributed data lake [100u], scheduling layer [100g], reporting engine [100m], message broker [100e], graph layer [100f], caching layer [100c], service quality manager [100q], and correlation engine [100n].
- the various components may include:
- Integrated performance management system [100a] comprises one or more 5G Performance Engines [100v] and one or more 5G Key Performance Indicator (KPI) Engines [100w].
- 5G Performance Management Engine [100v]: The 5G Performance Management Engine [100v] is a crucial component of the integrated system, responsible for collecting, processing, and managing performance counter data from various data sources within the network.
- the gathered data includes metrics such as connection speed, latency, data transfer rates, and many others.
- This raw data is then processed and aggregated as required, forming a comprehensive overview of network performance.
- the processed information is then stored in a Distributed Data Lake [100u], a centralized, scalable, and flexible storage solution, allowing for easy access and further analysis.
- the 5G Performance Management Engine [100v] also enables the reporting and visualization of this performance counter data, thus providing network administrators with a real-time, insightful view of the network's operation. Through these visualizations, operators can monitor the network's performance, identify potential issues, and make informed decisions to enhance network efficiency and reliability.
- 5G Key Performance Indicator (KPI) Engine [100w]: The 5G Key Performance Indicator (KPI) Engine is a dedicated component tasked with managing the KPIs of all the network elements. It uses the performance counters, which are collected and processed by the 5G Performance Management Engine from various data sources. These counters, encapsulating crucial performance data, are harnessed by the KPI Engine [100w] to calculate essential KPIs. These KPIs might include data throughput, latency, packet loss rate, and more. Once the KPIs are computed, they are segregated based on the aggregation requirements, offering a multi-layered and detailed understanding of network performance.
- the processed KPI data is then stored in the Distributed Data Lake [100u], ensuring a highly accessible, centralized, and scalable data repository for further analysis and utilization. Similar to the Performance Management Engine, the KPI Engine [100w] is also responsible for reporting and visualization of KPI data. This functionality allows network administrators to gain a comprehensive, visual understanding of the network's performance, thus supporting informed decision-making and efficient network management.
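As a hedged illustration of the kind of KPI derivation described above, the following computes a packet loss rate and throughput from raw counters; the counter names and formulas are generic textbook assumptions, not the engine's actual logic.

```python
# Illustrative KPI derivation from raw performance counters.
# Counter names and formulas are assumptions for demonstration only.

def packet_loss_rate(sent: int, received: int) -> float:
    """Packet loss as a fraction of packets sent."""
    return (sent - received) / sent if sent else 0.0

def throughput_mbps(bytes_transferred: int, interval_s: float) -> float:
    """Average throughput in megabits per second over a collection interval."""
    return bytes_transferred * 8 / interval_s / 1_000_000

counters = {"pkts_sent": 10_000, "pkts_received": 9_950,
            "bytes": 150_000_000, "interval_s": 60.0}
print(packet_loss_rate(counters["pkts_sent"], counters["pkts_received"]))  # 0.005
print(throughput_mbps(counters["bytes"], counters["interval_s"]))          # 20.0
```

In the described system, such derived values would then be segregated per the aggregation requirements and written to the Distributed Data Lake [100u].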
- the Ingestion layer forms a key part of the Integrated Performance Management system. Its primary function is to establish an environment capable of handling diverse types of incoming data. This data may include Alarms, Counters, Configuration parameters, Call Detail Records (CDRs), Infrastructure metrics, Logs, and Inventory data, all of which are crucial for maintaining and optimizing the network's performance.
- upon receiving data, the Ingestion layer validates its integrity and correctness to ensure it is fit for further use.
- the data is routed to various components of the system, including the Normalization layer, Streaming Engine, Streaming Analytics, and Message Brokers. The destination is chosen based on where the data is required for further analytics and processing. By serving as the first point of contact for incoming data, the Ingestion layer plays a vital role in managing the data flow within the system, thus supporting comprehensive and accurate network performance analysis.
- Normalization layer [100b]: The Normalization Layer [100b] serves to standardize, enrich, and store data into the appropriate databases. It takes in data that has been ingested and adjusts it to a common standard, making it easier to compare and analyse. This process of "normalization" reduces redundancy and improves data integrity. Upon completion of normalization, the data is stored in various databases like the Distributed Data Lake [100u], Caching Layer [100c], and Graph Layer [100f], depending on its intended use. The choice of storage determines how the data can be accessed and used in the future. Additionally, the Normalization Layer [100b] produces data for the Message Broker [100e], a system that enables communication between different parts of the performance management system through the exchange of data messages.
- the Normalization Layer [100b] supplies the standardized data to several other subsystems. These include the Analysis Engine for detailed data examination, the Correlation Engine [100n] for detecting relationships among various data elements, the Service Quality Manager [100q] for maintaining and improving the quality of services, and the Streaming Engine [100l] for processing real-time data streams. These subsystems depend on the normalized data to perform their operations effectively and accurately, demonstrating the Normalization Layer's [100b] critical role in the entire system.
- Caching layer [100c]: The Caching Layer [100c] in the Integrated Performance Management system plays a significant role in data management and optimization.
- the Normalization Layer [100b] processes incoming raw data to create a standardized format, enhancing consistency and comparability.
- the Normalizer Layer then inserts this normalized data into various databases.
- One such database is the Caching Layer [100c].
- the Caching Layer [100c] is a high-speed data storage layer which temporarily holds data that is likely to be reused, to improve speed and performance of data retrieval. By storing frequently accessed data in the Caching Layer [100c], the system significantly reduces the time taken to access this data, improving overall system efficiency and performance.
- the Caching Layer [100c] serves as an intermediate layer between the data sources and the sub-systems, such as the Analysis Engine, Correlation Engine [100n], Service Quality Manager, and Streaming Engine.
- the Normalization Layer [100b] is responsible for providing these sub-systems with the necessary data from the Caching Layer [100c].
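A minimal sketch of the caching idea described above, in which frequently accessed data is held temporarily to speed up retrieval; the time-to-live and evict-on-read policy are assumptions for illustration, not the Caching Layer's [100c] actual design.

```python
# Toy time-to-live cache illustrating temporary storage of frequently
# accessed data. TTL expiry and evict-on-read are illustrative assumptions.

import time

class TTLCache:
    def __init__(self, ttl_seconds: float):
        self.ttl = ttl_seconds
        self._store = {}  # key -> (value, expiry timestamp)

    def put(self, key, value):
        self._store[key] = (value, time.monotonic() + self.ttl)

    def get(self, key, default=None):
        entry = self._store.get(key)
        if entry is None:
            return default
        value, expiry = entry
        if time.monotonic() > expiry:  # entry expired: evict and report a miss
            del self._store[key]
            return default
        return value

cache = TTLCache(ttl_seconds=60.0)
cache.put("kpi:latency:gNB-001", 12.4)
print(cache.get("kpi:latency:gNB-001"))  # 12.4
```

A real caching layer would add size limits and an eviction policy such as LRU; the speed-up comes from answering repeated reads from memory instead of the backing store.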
- Computation layer [100d]: The Computation Layer [100d] in the Integrated Performance Management system serves as the main hub for complex data processing tasks.
- the Normalizer Layer then inserts this standardized data into multiple databases including the Distributed Data Lake [100u], Caching Layer [100c], and Graph Layer [100f], and also feeds it to the Message Broker [100e].
- the Analysis Engine [100h], Correlation Engine [100n], Service Quality Manager, and Streaming Engine utilize the normalized data. These systems are designed to execute various data processing tasks.
- the Analysis Engine performs in-depth data analytics to generate insights from the data.
- the Correlation Engine [lOOn] identifies and understands the relations and patterns within the data.
- the Service Quality Manager assesses and ensures the quality of the services.
- the Streaming Engine processes and analyses the real-time data feeds.
- the Computation Layer [100d] is where all major computation and data processing tasks occur. It uses the normalized data provided by the Normalization Layer [100b], processing it to generate useful insights, ensure service quality, understand data patterns, and facilitate real-time data analytics.
- Message broker [100e]: The Message Broker [100e], an integral part of the Integrated Performance Management system, operates as a publish-subscribe messaging system. It orchestrates and maintains the real-time flow of data from various sources and applications. At its core, the Message Broker [100e] facilitates communication between data producers and consumers through message-based topics. This creates an advanced platform for contemporary distributed applications. With the ability to accommodate a large number of permanent or ad-hoc consumers, the Message Broker [100e] demonstrates immense flexibility in managing data streams. Moreover, it leverages the filesystem for storage and caching, boosting its speed and efficiency. The design of the Message Broker [100e] is centred around reliability. It is engineered to be fault-tolerant and mitigate data loss, ensuring the integrity and consistency of the data. With its robust design and capabilities, the Message Broker [100e] forms a critical component in managing and delivering real-time data in the system.
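The publish-subscribe pattern described above can be sketched in miniature as follows; this toy broker is a stand-in for illustration and omits the persistence, fault tolerance, and filesystem-backed caching attributed to the Message Broker [100e].

```python
# Toy publish-subscribe broker illustrating topic-based communication
# between data producers and consumers. A stand-in for illustration only.

from collections import defaultdict

class Broker:
    def __init__(self):
        self._subscribers = defaultdict(list)  # topic -> list of callbacks

    def subscribe(self, topic: str, callback):
        """Register a consumer callback for a message-based topic."""
        self._subscribers[topic].append(callback)

    def publish(self, topic: str, message):
        # Deliver the message to every consumer subscribed to this topic.
        for callback in self._subscribers[topic]:
            callback(message)

broker = Broker()
received = []
broker.subscribe("counters.5g", received.append)
broker.publish("counters.5g", {"cell": "gNB-001", "latency_ms": 12})
print(received)  # [{'cell': 'gNB-001', 'latency_ms': 12}]
```

Producers and consumers stay decoupled: each side knows only the topic name, which is what lets ad-hoc consumers attach and detach freely.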
- Graph layer [100f]: The Graph Layer [100f], serving as the Relationship Modeler, plays a pivotal role in the Integrated Performance Management system. It can model a variety of data types, including alarm, counter, configuration, CDR data, Infra-metric data, 5G Probe Data, and Inventory data. Equipped with the capability to establish relationships among diverse types of data, the Relationship Modeler offers extensive modelling capabilities. For instance, it can model Alarm and Counter data, or Vprobe and Alarm data, elucidating their interrelationships.
- the Modeler should be adept at processing steps provided in the model and delivering the results to the system requested, whether it be a Parallel Computing system, Workflow Engine, Query Engine, Correlation System [100n], 5G Performance Management Engine, or 5G KPI Engine [100w]. With its powerful modelling and processing capabilities, the Graph Layer [100f] forms an essential part of the system, enabling the processing and analysis of complex relationships between various types of network data.
- Scheduling layer [100g] serves as a key element of the Integrated Performance Management System, endowed with the ability to execute tasks at predetermined intervals set according to user preferences.
- a task might be an activity performing a service call, an API call to another microservice, or the execution of an Elastic Search query and storing its output in the Distributed Data Lake [100u] or Distributed File System or sending it to another microservice.
- the versatility of the Scheduling Layer [100g] extends to facilitating graph traversals via the Mapping Layer to execute tasks. This crucial capability enables seamless and automated operations within the system, ensuring that various tasks and services are performed on schedule, without manual intervention, enhancing the system's efficiency and performance.
- the Scheduling Layer [100g] orchestrates the systematic and periodic execution of tasks, making it an integral part of the efficient functioning of the entire system.
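A simplified sketch of interval-driven task execution as described for the Scheduling Layer [100g], assuming an in-process poll loop; the real layer would presumably run persistently and dispatch service or API calls rather than local callables.

```python
# Minimal interval scheduler: each registered task runs whenever its
# configured interval has elapsed since its last run. Illustrative only.

import time

class IntervalScheduler:
    def __init__(self):
        self._tasks = []  # each entry: [interval_s, next_run_time, callable]

    def every(self, interval_s: float, task):
        """Register a task to run once per interval, eligible immediately."""
        self._tasks.append([interval_s, time.monotonic(), task])

    def run_pending(self):
        """Run every task whose scheduled time has arrived, then reschedule it."""
        now = time.monotonic()
        for entry in self._tasks:
            interval, next_run, task = entry
            if now >= next_run:
                task()
                entry[1] = now + interval  # push next run one interval ahead

runs = []
sched = IntervalScheduler()
sched.every(300.0, lambda: runs.append("enrich-network-data"))
sched.run_pending()   # first call runs the task immediately
print(runs)  # ['enrich-network-data']
```

In the described system, a task body might issue an API call to another microservice or store a query result in the Distributed Data Lake [100u] rather than appending to a list.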
- Analysis Engine [100h] forms a crucial part of the Integrated Performance Management System, designed to provide an environment where users can configure and execute workflows for a wide array of use-cases. This facility aids in the debugging process and facilitates a better understanding of call flows.
- Within the Analysis Engine [100h], users can perform queries on data sourced from various subsystems or external gateways. This capability allows for an in-depth overview of data and aids in pinpointing issues. The system's flexibility allows users to configure specific policies aimed at identifying anomalies within the data.
- the Analysis Engine [100h] provides a robust analytical environment for systematic data interrogation, facilitating efficient problem identification and resolution, thereby contributing significantly to the system's overall performance management.
- Parallel Computing Framework [100i] is a key aspect of the Integrated Performance Management System, providing a user-friendly yet advanced platform for executing computing tasks in parallel. This framework showcases both scalability and fault tolerance, crucial for managing vast amounts of data. Users can input data via Distributed File System (DFS) [100j] locations or Distributed Data Lake (DDL) indices.
- the framework supports the creation of task chains by interfacing with the Service Configuration Management (SCM) Sub-System. Each task in a workflow is executed sequentially, but multiple chains can be executed simultaneously, optimizing processing time. To accommodate varying task requirements, the service supports the allocation of specific host lists for different computing tasks.
- SCM Service Configuration Management
- the Parallel Computing Framework [100i] is an essential tool for enhancing processing speeds and efficiently managing computing resources, significantly improving the system's performance management capabilities.
- Distributed File System [100j]: The Distributed File System (DFS) [100j] is a critical component of the Integrated Performance Management System, enabling multiple clients to access and interact with data seamlessly.
- This file system is designed to manage data files that are partitioned into numerous segments known as chunks.
- the DFS [100j] effectively allows for the distribution of data across multiple nodes.
- This architecture enhances both the scalability and redundancy of the system, ensuring optimal performance even with large data sets.
- DFS [100j] also supports diverse operations, facilitating the flexible interaction with and manipulation of data. This accessibility is paramount for a system that requires constant data input and output, as is the case in a robust performance management system.
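The chunk-based partitioning described above might be sketched as follows; the fixed chunk size and round-robin placement are illustrative assumptions, not the DFS's [100j] actual policy.

```python
# Sketch of splitting a data file into fixed-size chunks and distributing
# them across data nodes. Chunk size and round-robin placement are
# illustrative assumptions only.

def chunk_and_place(data: bytes, chunk_size: int, nodes: list) -> dict:
    """Split data into chunks and assign each chunk to a node round-robin."""
    placement = {}
    for offset in range(0, len(data), chunk_size):
        chunk_id = offset // chunk_size
        placement[chunk_id] = {
            "node": nodes[chunk_id % len(nodes)],   # rotate across data nodes
            "data": data[offset:offset + chunk_size],
        }
    return placement

placement = chunk_and_place(b"x" * 10, chunk_size=4, nodes=["dn1", "dn2"])
print({cid: p["node"] for cid, p in placement.items()})  # {0: 'dn1', 1: 'dn2', 2: 'dn1'}
```

A production DFS would additionally replicate each chunk on multiple nodes, which is what gives the redundancy the passage mentions.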
- Load Balancer [100k]: The Load Balancer (LB) [100k] is a vital component of the Integrated Performance Management System, designed to efficiently distribute incoming network traffic across a multitude of backend servers or microservices. Its purpose is to ensure the even distribution of data requests, leading to optimized server resource utilization, reduced latency, and improved overall system performance.
- the LB [100k] implements various routing strategies to manage traffic. These include round-robin scheduling, header-based request dispatch, and context-based request dispatch. Round-robin scheduling is a simple method of rotating requests evenly across available servers. In contrast, header- and context-based dispatching allow for more intelligent, request-specific routing.
- Header-based dispatching routes requests based on data contained within the headers of the Hypertext Transfer Protocol (HTTP) requests.
- Context-based dispatching routes traffic based on the contextual information about the incoming requests. For example, in an event-driven architecture, the LB [100k] manages event and event acknowledgments, forwarding requests or responses to the specific microservice that has requested the event. This system ensures efficient, reliable, and prompt handling of requests, contributing to the robustness and resilience of the overall performance management system.
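The round-robin and header-based strategies named above can be sketched together; the header name `X-Target-Service` and the backend identifiers are assumptions for demonstration.

```python
# Sketch of two of the dispatch strategies described above: header-based
# routing with a round-robin fallback. Header name and backends are
# illustrative assumptions.

import itertools

class LoadBalancer:
    def __init__(self, backends: list, header_routes: dict):
        self._rr = itertools.cycle(backends)  # round-robin rotation of backends
        self._routes = header_routes          # header value -> dedicated backend

    def dispatch(self, headers: dict) -> str:
        # Header-based dispatch: route on an HTTP header when a rule matches.
        service = headers.get("X-Target-Service")
        if service in self._routes:
            return self._routes[service]
        # Otherwise fall back to simple round-robin scheduling.
        return next(self._rr)

lb = LoadBalancer(["ipm-1", "ipm-2"], {"indexer": "indexer-1"})
print(lb.dispatch({"X-Target-Service": "indexer"}))  # indexer-1
print(lb.dispatch({}))                               # ipm-1
print(lb.dispatch({}))                               # ipm-2
```

Context-based dispatch would extend the same idea to request metadata beyond headers, such as which microservice originally requested an event.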
- Streaming Engine [100l]: The Streaming Engine [100l], also referred to as Stream Analytics, is a critical subsystem in the Integrated Performance Management System. This engine is specifically designed for high-speed data pipelining to the User Interface (UI). Its core objective is to ensure real-time data processing and delivery, enhancing the system's ability to respond promptly to dynamic changes. Data is received from various connected subsystems and processed in real-time by the Streaming Engine [100l]. After processing, the data is streamed to the UI, fostering rapid decision-making and responses. The Streaming Engine [100l] cooperates with the Distributed Data Lake [100u], Message Broker [100e], and Caching Layer [100c] to provide seamless, real-time data flow.
- Stream Analytics is designed to perform required computations on incoming data instantly, ensuring that the most relevant and up-to-date information is always available at the UI. Furthermore, this system can also retrieve data from the Distributed Data Lake [100u], Message Broker [100e], and Caching Layer [100c] as per the requirement and deliver it to the UI in real-time.
- the Streaming Engine's [100l] ultimate goal is to provide fast, reliable, and efficient data streaming, contributing to the overall performance of the management system.
- Reporting Engine [100m] The Reporting Engine [100m] is a key subsystem of the Integrated Performance Management System.
- the fundamental purpose of designing the Reporting Engine [100m] is to dynamically create report layouts of API data, catered to individual client requirements, and deliver these reports via the Notification Engine (not shown).
- the Reporting Engine [100m] serves as the primary interface for creating custom reports based on the data visualized through the client's dashboard.
- These custom dashboards, created by the client through the User Interface (UI), provide the basis for the Reporting Engine [100m] to process and compile data from various interfaces.
- the main output of the Reporting Engine [100m] is a detailed report generated in spreadsheet format.
- the Reporting Engine’s [100m] unique capability to parse data from different subsystem interfaces, process it according to the client's specifications and requirements, and generate a comprehensive report makes it an essential component of this performance management system.
- the Reporting Engine [100m] integrates seamlessly with the Notification Engine (not shown) to ensure timely and efficient delivery of reports to clients via email, ensuring the information is readily accessible and usable, thereby improving overall client satisfaction and system usability.
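As a rough illustration of the spreadsheet-format output described above, the sketch below compiles parsed rows into CSV. The column and field names are assumptions, since the disclosure does not specify the report schema:

```python
import csv
import io

def build_report(rows, columns):
    """Compile parsed API data rows into a CSV (spreadsheet-format) report."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=columns)
    writer.writeheader()
    for row in rows:
        # Missing fields are left blank rather than failing the report.
        writer.writerow({c: row.get(c, "") for c in columns})
    return buf.getvalue()
```

The resulting string could then be handed to a notification component for delivery, in the spirit of the Notification Engine integration described above.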
- the present invention focuses on the creation of network areas, i.e., a dynamic network area and a static network area, via a user interface (UI), an integrated performance management system (IPMS), an indexer (IN), and a distributed data lake (DDL).
- the solution as disclosed by the present disclosure is implemented via an exemplary system [200] as shown in Fig. 2 for creating the static network area and the dynamic network area, in accordance with the exemplary embodiments of the present invention, wherein the system [200] works in conjunction with the system [100].
- the dynamic network area refers to a flexible and changing network environment based on real-time network conditions, such as a network IP address.
- the static network area refers to a fixed network environment set up for providing services in the network.
- Fig. 2 illustrates an exemplary system for creating a network area i.e., static network area and dynamic network area, in accordance with the exemplary embodiments of the present invention.
- the system [200] comprises at least one user interface (UI) [202], at least one load balancer [100k], at least one integrated performance management (IPM)/integrated performance management system (IPMS) [100a], at least one indexer (IN) [208], and at least one distributed data lake (DDL) [100u], as shown in Fig. 2.
- the devices/components are shown for illustrative purposes and the system is not restricted to the shown devices/components only; there may be more devices/components present in the system [200].
- the UI [202] of the system [200] is configured to receive a request for creating the network area.
- a user or a network administrator may request for creating the network area from the UI [202]
- the request to create the network area comprises at least a selection of one or more nodes and one or more categories for which the network area is to be created, i.e., the user may provide at least one of parameters such as, but not limited to, cluster, circle, a number of network node(s) (e.g.
- the UI [202] may be a part of, or externally attached to, a computing device, smartphone, laptop, human machine interface (HMI), and the like. After receiving the request for creating the network area, the user or network administrator may save the created network area.
- the network area comprises at least one of the static network area and/or the dynamic network area.
- the one or more nodes comprises at least one of servers, switches, databases, and gateways.
- one or more nodes may be associated with a communication network.
- one or more nodes may be associated in the communication network with network functions, such as the access and mobility management function (AMF) and the session management function (SMF).
- one or more nodes comprise servers or databases associated with the AMF and SMF.
- the one or more categories may be at least one of, but not limited to, customer service type, network service establishing type, and premium service type.
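The shape of the creation request described above can be sketched as a small data structure. The field names below are assumptions for illustration; the disclosure only states that the request carries the area type, the selected nodes, and the selected categories:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class NetworkAreaRequest:
    """Hypothetical payload for a network-area creation request."""
    area_type: str                                       # "static" or "dynamic"
    nodes: List[str] = field(default_factory=list)       # e.g. servers, switches, gateways
    categories: List[str] = field(default_factory=list)  # e.g. "customer service type"

    def validate(self):
        # The request must name the area type plus at least one node and category.
        if self.area_type not in ("static", "dynamic"):
            raise ValueError("network area must be static or dynamic")
        if not self.nodes or not self.categories:
            raise ValueError("at least one node and one category are required")
        return True
```

A UI handler could build and validate such a request before handing it to the load balancer.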
- the system [200] further comprises the load balancer [100k], which may distribute the incoming traffic from the UI [202] or another network component/device.
- the load balancer [100k] efficiently routes the traffic to other network components so that network operation is optimally maintained and performance is not affected.
- the load balancer [100k] is configured to transmit the request to an integrated performance management (IPM) [100a].
- the load balancer [100k] may transmit the network traffic from the UI [202] to the IPM [100a] (hereinafter also referred to as the IPMS unit [100a]) that has a low network load.
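The low-load selection just described can be sketched as picking the instance with the smallest reported load. The load metric and instance names are assumptions; the disclosure does not specify how load is measured:

```python
def pick_least_loaded(instances):
    """Return the IPM/IPMS instance record with the lowest reported load."""
    return min(instances, key=lambda inst: inst["load"])

# Illustrative instance registry; loads would come from real monitoring.
instances = [
    {"name": "ipm-1", "load": 0.72},
    {"name": "ipm-2", "load": 0.18},  # least loaded; the LB should pick this
    {"name": "ipm-3", "load": 0.55},
]
```

In practice the load balancer would refresh these load figures continuously rather than use a static list.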
- the system [200] further comprises the IPM/IPMS unit [100a], which is configured to store data associated with the request at a Distributed Data Lake (DDL) [100u], i.e., the data related to the user's created or configured network area and the defined set of parameters received from the UI [202] via the load balancer [100k].
- the IPM/IPMS unit [100a] stores the received network area creation request data and the set of parameters into the DDL [100u].
- the IPM [100a] is further configured to transmit the data associated with the request to the indexer, i.e., to send the received network area request data and the set of parameters to the indexer for analysing and creating the network area.
- the system [200] further comprises the Indexer [208], which is configured to analyse the data associated with the request to create the network area. Further, the indexer [208] may be configured to analyse the set of parameters associated with the network area. Further, the indexer [208] may be configured to create the network area based on the analysis of the data associated with the request i.e., the indexer [208] analyses the user’s request network area data with the set of parameters and creates the network area based on the analysis of the network data associated with the request. Further, the indexer [208] is configured to enrich a network data associated with the created network based on a set of user input received from a user. Further, the enrichment of the network data is performed in a predefined scheduled interval of time.
- the indexer [208] may provide one or more option(s) to receive a set of inputs from the user to enrich a network data associated with the created network area. Thereafter, the indexer [208] is configured to upload the enriched network data at the DDL [100u] for storage, i.e., to store the enriched network data into the DDL [100u].
- the set of user input for the enrichment comprises a first input from the user for selection of at least one existing field from which the new network area is to be derived, and a second input from the user for selection of an operation to be executed on the selected at least one existing field.
- the indexer [208] is configured to generate a value corresponding to the executed operation on the selected at least one existing field.
- the indexer [208] is configured to map the generated value to a pre-defined value provided within a data set.
- the indexer [208] is configured to assign the mapped value to the created new network area.
- the user may select, via the UI [202], at least one existing field such as a ‘static network area field’ or a ‘dynamic network area field’ from which a new network area is to be derived via the indexer [208]. Further, the user may provide one or more inputs for selection of an operation to be executed on the selected at least one existing field.
- the user may perform one or more operations such as, but not limited to, concatenating, splitting, or otherwise transforming the data. The operation helps transform or manipulate the data within the existing field.
- based on the applied operation on the selected existing field, the indexer [208] generates a value and then maps this generated value to a corresponding pre-defined value within a data set.
- user or network administrator may define a pre-defined data set and value in a spreadsheet format. Further, user or network administrator may define predefined mappings or rules that specify how certain or exemplary values should be translated or interpreted in the spreadsheet.
- the indexer [208] assigns the mapped value to the created new network area. In an implementation, the mapped value obtained from the spreadsheet is assigned to the newly created network area. This value represents the desired outcome or characteristic of the network area based on the selected node, category, existing field, and applied operation.
- the indexer [208] stores the data associated with the created new network into the DDL [100u].
- the user selects the node and category for which the user wants to create the network area. Then the existing field (e.g., SNA/HNA/CNA, etc.) is selected from which the new network area needs to be derived. Thereafter, the user selects the operation whose application on the existing field gives a value, which is then mapped to a value in a provided file (e.g., a spreadsheet). These values are then assigned to the created network area.
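The derivation steps above can be sketched end to end: apply the chosen operation to the selected existing field, map the generated value through a user-provided lookup (e.g. parsed from the uploaded spreadsheet), and assign the result to the new network area. The operation implementations, field names, and mapping values below are assumptions for illustration:

```python
# Assumed example operations; the disclosure only names splitting,
# concatenating, and transforming in general terms.
OPERATIONS = {
    "split": lambda v: v.split("-")[0],      # keep the prefix of the field value
    "concat": lambda v: v + "-derived",      # append a suffix to the field value
}

def enrich(record, existing_field, operation, mapping):
    """Generate a value from the existing field, map it to a pre-defined
    value from the data set, and assign it to the new network area."""
    generated = OPERATIONS[operation](record[existing_field])
    # Fall back to the generated value if no pre-defined mapping exists.
    record["new_network_area"] = mapping.get(generated, generated)
    return record

record = {"SNA": "north-42"}
mapping = {"north": "Region-North"}  # e.g. parsed from the uploaded spreadsheet
enrich(record, "SNA", "split", mapping)
# record["new_network_area"] is now "Region-North"
```

This mirrors the generate / map / assign sequence attributed to the indexer [208], without claiming to be its actual implementation.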
- FIG. 3 an exemplary method flow diagram [300], for creating a network area i.e., a static network area and a dynamic network area, in accordance with exemplary embodiments of the present invention is shown.
- the method [300] is performed by the system [200]. As shown in Fig. 3, the method [300] starts at step [302].
- the method [300] as disclosed by the present disclosure comprises receiving, at a user interface (UI) [202], a request for creating the network area.
- the network area comprises at least one of a static network area or a dynamic network area.
- the request to create the network area comprises at least selection of one or more nodes, and one or more categories for which the network area is to be created.
- the user may submit, via the UI [202], the request for creating the network area based on one or more network nodes or one or more category types.
- the category type may be, such as, but not limited to, a customer service type, a service level, and the like.
- the method [300] as disclosed by the present disclosure comprises transmitting, by a load balancer [100k], the request to an integrated performance management (IPM) [100a].
- the method [300] implemented by the system [200] comprises transmitting, by the load balancer [100k], the incoming request from the UI [202] to the IPM [100a].
- the load balancer [100k] efficiently routes the traffic to the IPM [100a] or other network components so that network operation is optimally maintained and performance is not affected.
- the load balancer [100k] may transmit the network traffic from the UI [202] to one of the IPM/IPMS unit [100a], which has low network load.
- the method [300] as disclosed by the present disclosure comprises storing, by the IPM [100a], a data associated with the request at a Distributed Data Lake (DDL) [100u].
- the method [300] implemented by the system [200] comprises the IPM [100a], which stores data associated with the request at the DDL [100u].
- the IPM/IPMS unit [100a] may receive request data related to the user's created or configured network area and the defined set of parameters from the UI [202] via the load balancer [100k].
- the method [300] as disclosed by the present disclosure comprises transmitting, by the IPM [100a], the data associated with the request to an Indexer (IN) [208].
- the method [300] implemented by the system [200] comprises the IPM unit [100a], which transmits the data associated with the request to the Indexer (IN) [208].
- the IPM [100a] receives request data related to the user-created and/or configured network area and the defined set of parameters from the UI [202] via the load balancer [100k].
- the IPM/IPMS unit [100a] transmits the received network area request data and the set of parameters to the indexer [208] for analysing and creating the network area.
- the method [300] as disclosed by the present disclosure comprises analysing, by the Indexer [208], the data associated with the request to create the network area.
- the method [300] further comprises the indexer [208], wherein the indexer [208] analyses the data associated with the request to create the network area.
- the indexer [208] performs one or more pre-processing or processing operations on the incoming data associated with the request to create the network area, using the user-defined set of parameters, number of nodes, types of category, and the like.
- the method [300] as disclosed by the present disclosure comprises creating, by the Indexer [208], the network area based on the analysis of the data associated with the request.
- the method [300] comprises the indexer [208] for creating the network area based on the analysis of the data associated with the request.
- the indexer [208] may create the network area as one of the types such as the static network area and the dynamic network area.
- the method [300] as disclosed by the present disclosure comprises enriching, by the indexer [208], a network data associated with the created network area based on a set of user input received from a user.
- the method [300] comprises the indexer [208] enriching the network data associated with the created network area based on a set of user input received from a user.
- the indexer [208] may provide one or more option(s) to receive the set of inputs from the user to enrich the network data associated with the created network area.
- the set of user input received by the indexer [208] for the enrichment comprises a first input from the user for selection of at least one existing field from which the new network area is to be derived and a second input from the user for selection of an operation to be executed on the selected at least one existing field.
- the user may select, via the UI [202], at least one existing field such as a ‘static network area field’ or a ‘dynamic network area field’ from which a new network area is to be derived via the indexer [208]. Further, the user may provide one or more inputs for selection of an operation to be executed on the selected at least one existing field.
- the user may perform one or more operations such as, but not limited to, concatenating, splitting, or otherwise transforming the data. The operation helps transform or manipulate the data within the existing field.
- the enrichment of the network data by the indexer [208] based on the set of user input comprises generating, by the indexer [208], a value corresponding to the executed operation on the selected at least one existing field. Further, the enrichment comprises mapping, by the indexer [208], the generated value to a pre-defined value provided within a data set. Thereafter, the enrichment comprises assigning, by the indexer [208], the mapped value to the created new network area.
- the indexer [208] may generate the value corresponding to the executed operation (e.g., splitting, concatenating) on the selected at least one existing field (e.g., SNA/CNA/HNA). Further, the indexer [208] maps the generated value to a pre-defined value provided within a data set and assigns the mapped value to the created new network area. In an exemplary implementation, based on the applied operation on the selected existing field, the indexer [208] generates a value and then maps this generated value to a corresponding pre-defined value within a data set. In an implementation, the user or network administrator may define a pre-defined data set and value in a spreadsheet format.
- mappings or rules that specify how certain or exemplary values should be translated or interpreted in the spreadsheet.
- the indexer assigns the mapped value to the created new network area.
- the mapped value obtained from the spreadsheet is assigned to the newly created network area. This value represents the desired outcome or characteristic of the network area based on the selected node, category, existing field, and applied operation.
- the enrichment of the network data is performed in a predefined scheduled interval of time via the indexer [208].
- the user or network administrator may define an interval time and at least one of a number of network nodes, category types, geographic location, boundary region, and the like to perform the enrichment of the network data.
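The scheduled enrichment described above can be sketched as a simple fixed-interval loop. This is a minimal sketch under assumed names; the disclosure does not specify the scheduling mechanism, and a production scheduler would run until cancelled rather than a fixed number of times:

```python
import time

def run_scheduled_enrichment(enrich_fn, interval_seconds, max_runs):
    """Invoke the enrichment job every interval_seconds, max_runs times.
    max_runs keeps this sketch finite; a real deployment would loop
    until the schedule is cancelled by the administrator."""
    for _ in range(max_runs):
        enrich_fn()
        time.sleep(interval_seconds)
```

The `enrich_fn` callback would wrap the generate / map / assign steps performed by the indexer [208] for the user-defined nodes, categories, and regions.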
- the method [300] as disclosed by the present disclosure comprises uploading, by the indexer [208], the enriched network data at the DDL [100u] for storage.
- the method [300] comprises uploading, by the indexer [208], the enriched network data at the DDL [100u] for storage after processing.
- the indexer [208] may store data associated with the created new network area into the DDL [100u].
- Fig. 4 illustrates an exemplary process [400] for creating a network area, i.e., a static network area and a dynamic network area, in accordance with the exemplary embodiments of the present invention.
- the process as depicted in Fig. 4 is executed by the system [200] in conjunction with the system [100] to create a network area, i.e., a dynamic network area and a static network area, from the existing information/fields or from a sub-part of an existing field available at disposal in network systems, wherein the dynamic network area and the static network area are created at least to fulfill a user requirement of a new field or value in the data sets and results.
- the user [402] sends a request to the UI server (such as UI [202]) for network area creation.
- the request may comprise a number of nodes, a type of category, and a geographic location for network area creation.
- at step S2, the request for network area creation is forwarded to the load balancer [100k].
- the load balancer [100k] checks for an available instance of the IPM [100a] for sending the request for network area creation.
- the load balancer [100k] hits the available IPM [100a] instance for sending the request for network area creation.
- the IPM [100a] saves the data associated with the request for network area creation into the distributed data lake [100u].
- the IPM [100a] forwards the data associated with the request for network area creation to the indexer (IN) [208], and subsequently, at step S6, the indexer [208] analyses the received data and stores the analysed network area data into the database [100u].
- the indexer [208] is configured to analyse the data associated with the request to create the network area (SNA/CNA/HNA) with set of parameters such as, one or more network nodes (e.g., servers) and category type (e.g., customer service type).
- the indexer [208] performs enrichment of the network data associated with the created network based on a set of user input received from the user [402], such as existing fields (e.g., SNA/CNA/HNA) and operations (e.g., splitting, concatenating) from which the new network area is to be derived.
- firstly, the indexer [208] may generate a new field or value, map the generated field or value with a predefined data set value, and assign the mapped value to the created new network area.
- secondly, the indexer [208] may perform scheduled enrichment on the network data associated with the created network based on a set of user input received from a user [402].
- the user [402] (such as network administrator) may define interval time, and at least one of number of network nodes, category types, geographic location, boundary region and the like to perform the enrichment of the network data associated with the created network.
- at step S8 and step S9, the IPM [100a] sends, via the UI [202], to the user [402] a confirmation of the successful creation of the network area request, the stored information of the enrichment data, and the created new network.
- Fig. 5 illustrates an exemplary block diagram of a computing device [500] (also referred herein as a computer system [500]) upon which an embodiment of the present disclosure may be implemented.
- the computing device [500] implements the method for creating a network area, i.e., a dynamic and static network area, using the system [200].
- the computing device [500] itself implements the method for creating a network area i.e., a dynamic and static network area using one or more units configured within the computing device [500], wherein said one or more units are capable of implementing the features as disclosed in the present disclosure.
- the computing device [500] may include a bus [502] or other communication mechanism for communicating information, and a processor [504] coupled with bus [502] for processing information.
- the processor [504] may be, for example, a general purpose microprocessor.
- the computing device [500] may also include a main memory [506], such as a random access memory (RAM) or other dynamic storage device, coupled to the bus [502] for storing information and instructions to be executed by the processor [504].
- the main memory [506] also may be used for storing temporary variables or other intermediate information during execution of the instructions to be executed by the processor [504]. Such instructions, when stored in non-transitory storage media accessible to the processor [504], render the computing device [500] into a special-purpose machine that is customized to perform the operations specified in the instructions.
- the computing device [500] further includes a read only memory (ROM) [508] or other static storage device coupled to the bus [502] for storing static information and instructions for the processor [504].
- a storage device [510], such as a magnetic disk, optical disk, or solid-state drive is provided and coupled to the bus [502] for storing information and instructions.
- the computing device [500] may be coupled via the bus [502] to a display [512], such as a cathode ray tube (CRT), for displaying information to a computer user.
- An input device [514] may be coupled to the bus [502] for communicating information and command selections to the processor [504].
- Another type of user input device may be a cursor controller [516], such as a mouse, a trackball, or cursor direction keys, for communicating direction information and command selections to the processor [504], and for controlling cursor movement on the display [512].
- This input device typically has two degrees of freedom in two axes, a first axis (e.g., x) and a second axis (e.g., y), that allow the device to specify positions in a plane.
- the computing device [500] may implement the techniques described herein using customized hard-wired logic, one or more ASICs or FPGAs, firmware and/or program logic which in combination with the computing device [500] causes or programs the computing device [500] to be a special-purpose machine.
- the techniques herein are performed by the computing device [500] in response to the processor [504] executing one or more sequences of one or more instructions contained in the main memory [506]. Such instructions may be read into the main memory [506] from another storage medium, such as the storage device [510]. Execution of the sequences of instructions contained in the main memory [506] causes the processor [504] to perform the process steps described herein.
- hardwired circuitry may be used in place of or in combination with software instructions.
- the computing device [500] also may include a communication interface [518] coupled to the bus [502].
- the communication interface [518] provides a two-way data communication coupling to a network link [520] that is connected to a local network [522].
- the communication interface [518] may be an integrated services digital network (ISDN) card, cable modem, satellite modem, or a modem to provide a data communication connection to a corresponding type of telephone line.
- the communication interface [518] may be a local area network (LAN) card to provide a data communication connection to a compatible LAN.
- Wireless links may also be implemented.
- the communication interface [518] sends and receives electrical, electromagnetic, or optical signals that carry digital data streams representing various types of information.
- the computing device [500] can send messages and receive data, including program code, through the network(s), the network link [520], and the communication interface [518].
- a server [530] might transmit a requested code for an application program through the Internet [528], the ISP [526], the local network [522], the host [524], and the communication interface [518].
- the received code may be executed by the processor [504] as it is received, and/or stored in the storage device [510], or other non-volatile storage for later execution.
- a non-transitory computer-readable storage medium storing instructions for creating a network area
- the storage medium comprising executable code which, when executed by one or more units of a system, causes: a User Interface (UI) [202] to receive a request for creating the network area; a load balancer [100k] to transmit the request to an integrated performance management (IPM) [100a]; the IPM [100a] to: store a data associated with the request at a Distributed Data Lake (DDL) [100u]; transmit the data associated with the request to an Indexer (IN) [208]; and an Indexer [208] to: analyse the data associated with the request to create the network area, create the network area based on the analysis of the data associated with the request, enrich a network data associated with the created network based on a set of user input received from a user, and upload the enriched network data at the DDL [100u] for storage.
- a User Equipment (UE) for creating a network area comprising a processor configured to: send, via a User Interface (UI) [202], a request for creating the network area; transmit, via a load balancer [100k], the request to an integrated performance management (IPM) [100a]; store, via the IPM [100a], a data associated with the request at a Distributed Data Lake (DDL) [100u]; transmit, via the IPM [100a], the data associated with the request to an Indexer (IN) [208]; analyse, via the Indexer [208], the data associated with the request to create the network area; create, via the Indexer [208], the network area based on the analysis of the data associated with the request; enrich, via the Indexer [208], a network data associated with the created network based on a set of user input received from a user; and upload, via the indexer [208], the enriched network data at the DDL [100u] for storage.
- the present disclosure provides a technically advanced solution for creating a dynamic network area and a static network area from existing information/fields or from a sub-part of an existing field available at disposal in network systems, wherein the dynamic network area and the static network area are created at least to fulfill a user requirement of a new field in the documents and results.
- the enrichment facility mentioned in the present disclosure for new fields is completely autonomous, scheduled, follows user-defined rules, and takes effect as soon as CNAs, HNAs, and SNAs are created.
- the values for the newly created network area are decided based on the values of the old existing field.
- the present disclosure provides a mapping between these two, by either entering them one by one manually or uploading them using a spreadsheet.
- the present disclosure facilitates the user to modify their Network logic in real-time whilst observing the corresponding changes.
- network areas, i.e., the dynamic network area and the static network area, are created at different granularities: for one network node only, for multiple nodes in the network, for one category in a network node, or for selected categories in a network node, etc. Hence, this helps in drilling down the information at various levels in the network for enhanced analysis. The solution also helps operations to roll up and drill down monitoring of KPIs and counters for their troubleshooting.
Landscapes
- Engineering & Computer Science (AREA)
- Computer Networks & Wireless Communication (AREA)
- Signal Processing (AREA)
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
Abstract
The present disclosure relates to a method and a system for creating a network area. The disclosure encompasses: receiving, at a User Interface (UI) [202], a request for creating the network area; transmitting, by a load balancer [100k], the request to an integrated performance management (IPM) [100a]; storing, by the IPM [100a], a data associated with the request at a Distributed Data Lake (DDL) [100u]; transmitting, by the IPM [100a], the data associated with the request to an Indexer (IN) [208]; analysing, by the Indexer [208], the data associated with the request to create the network area; creating, by the Indexer [208], the network area based on the analysis of the data associated with the request; enriching, by the indexer [208], a network data associated with the created network area based on a set of user input received from a user; and uploading, by the indexer [208], the enriched network data at the DDL [100u] for storage.
Description
METHOD AND SYSTEM FOR CREATING A NETWORK AREA
FIELD OF INVENTION
[0001] The present disclosure generally relates to a network performance management system. More particularly, the present disclosure relates to a method and system of creating a static network area and a dynamic network area.
BACKGROUND
[0002] The following description of the related art is intended to provide background information pertaining to the field of the disclosure. This section may include certain aspects of the art that may be related to various features of the present disclosure. However, it should be appreciated that this section is used only to enhance the understanding of the reader with respect to the present disclosure, and not as admissions of the prior art.
[0003] Network performance management systems typically track network elements and data from network monitoring tools and combine and process such data to determine key performance indicators (KPI) of the network. Integrated performance management systems provide the means to visualize the network performance data so that network operators and other relevant stakeholders are able to identify the service quality of the overall network, and individual/ grouped network elements. By having an overall as well as detailed view of the network performance, the network operators can detect, diagnose, and remedy actual service issues, as well as predict potential service issues or failures in the network and take precautionary measures accordingly.
[0004] In network performance management systems, a mix of different kinds of information generally gives a more complete understanding of a scenario. A user might require a new field in the documents and results, with the possibility of it being derived from two or more existing fields or from a sub-part of an existing field. Over the period of time, various solutions have been developed to provide the user with different kinds of information or with the required information from various fields; however, there are certain challenges with the existing solutions. For instance, the existing solutions are not efficient in providing the user with the required information or a mix of different kinds of information, thereby leading to a partial or vague understanding of a scenario in the network systems. Moreover, the existing solutions are inefficient in deriving a required new field in the documents and results from two or more existing fields/information or from a sub-part of an existing field/information available at their disposal. Furthermore, these limitations also render the existing solutions unable to help operations roll up and drill down the monitoring of KPIs and counters for troubleshooting.
[0005] Thus, there exists an imperative need in the art to provide a solution that can overcome these and other limitations of the existing solutions, which the present disclosure aims to address.
OBJECTS OF THE INVENTION
[0006] Some of the objects of the present disclosure, which at least one embodiment disclosed herein satisfies are listed herein below.
[0007] It is an object of the present disclosure to provide a solution that creates a static network area and a dynamic network area from the existing information/fields, or from a sub-part of an existing field available at disposal in network systems, wherein the dynamic network area and the static network area are created at least to fulfill a user requirement of a new field in the documents and results.
[0008] It is another object of the present disclosure to provide a solution that helps operations to roll up and drill down the monitoring of Key Performance Indicators (KPIs) and counters for troubleshooting, by providing, to fulfill a user requirement, the dynamic network area and the static network area from the existing information/fields.
[0009] It is yet another object of the present disclosure to provide a solution that provides an enrichment facility for new fields that is completely autonomous, scheduled, follows user-defined rules, and takes effect as soon as the dynamic network areas (converged network areas (CNAs) and hierarchical network areas (HNAs)) and static network areas (SNAs) are created.
[0010] It is yet another object of the present disclosure to provide a solution that provides users with a facility for auto-enrichment.
[0011] It is yet another object of the present disclosure to provide a solution that decides values for the newly created network area based on the values of the old existing field(s), wherein flexibility is given to provide a mapping between the two, either by entering the values one by one manually or by uploading them using a data file such as a spreadsheet.
[0012] It is yet another object of the present disclosure to provide a solution that helps in drilling down the information at various levels in the network for enhanced analysis.
[0013] It is yet another object of the present disclosure to provide a solution that, when needed, provides a facility for a user to create network areas from the existing network areas in the same manner, and allows the user to modify the network logic in real time whilst observing the corresponding changes.
SUMMARY
[0014] This section is provided to introduce certain aspects of the present disclosure in a simplified form that are further described below in the detailed description. This summary is not intended to identify the key features or the scope of the claimed subject matter.
[0015] According to an aspect of the present disclosure, a method for creating a network area is disclosed. The method includes receiving, at a User Interface (UI), a request for creating the network area. Next, the method includes transmitting, by a load balancer, the request to an integrated performance management (IPM). Next, the method includes storing, by the IPM, a data associated with the request at a Distributed Data Lake (DDL). Next, the method includes transmitting, by the IPM, the data associated with the request to an Indexer (IN). Next, the method includes analysing, by the Indexer, the data associated with the request to create the network area. Next, the method includes creating, by the Indexer, the network area based on the analysis of the data associated with the request. Next, the method includes enriching, by the indexer, a network data associated with the created network area based on a set of user input received from a user. Thereafter, the method includes uploading, by the indexer, the enriched network data at the DDL for storage.
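By way of non-limiting illustration only, the request flow described above may be sketched as follows. All class and function names (DistributedDataLake, Indexer, IPM, the dictionary-based request format) are hypothetical placeholders chosen for this sketch and do not form part of, or limit, the claimed subject matter:

```python
# Illustrative, hypothetical sketch of the claimed flow: a request received at
# the UI is forwarded (via the load balancer) to the IPM, which stores it in
# the DDL and passes it to the Indexer for analysis, creation, and enrichment.

class DistributedDataLake:
    """Minimal stand-in for the DDL [100u]: a key-value store."""
    def __init__(self):
        self.storage = {}

    def store(self, key, data):
        self.storage[key] = data


class Indexer:
    """Stand-in for the Indexer [208]: analyses the request, creates the area."""
    def __init__(self, ddl):
        self.ddl = ddl

    def create_network_area(self, request):
        # Analyse the request: the selected nodes and categories define the area.
        return {
            "name": request["area_name"],
            "nodes": request["nodes"],
            "categories": request["categories"],
        }

    def enrich_and_upload(self, area, user_inputs):
        # Enrich the network data with user-supplied fields, then upload to DDL.
        area.update(user_inputs)
        self.ddl.store(area["name"], area)
        return area


class IPM:
    """Stand-in for the integrated performance management system [100a]."""
    def __init__(self, ddl, indexer):
        self.ddl = ddl
        self.indexer = indexer

    def handle(self, request):
        # Store the raw request data in the DDL, then forward it to the Indexer.
        self.ddl.store("request:" + request["area_name"], request)
        return self.indexer.create_network_area(request)


# Usage: a request received at the UI, forwarded by the load balancer to the IPM.
ddl = DistributedDataLake()
indexer = Indexer(ddl)
ipm = IPM(ddl, indexer)

request = {"area_name": "CNA-North", "nodes": ["gNB-1", "gNB-2"], "categories": ["radio"]}
area = ipm.handle(request)
enriched = indexer.enrich_and_upload(area, {"region": "north"})
```

The sketch deliberately collapses the load balancer into a direct call; in the disclosed system it distributes requests across IPM instances.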
[0016] In an exemplary aspect of the present disclosure, the enrichment of the network data is performed in a predefined scheduled interval of time.
[0017] In an exemplary aspect of the present disclosure, the network area comprises at least one of a static network area or a dynamic network area.
[0018] In an exemplary aspect of the present disclosure, the request to create the network area comprises at least selection of one or more nodes, and one or more categories for which the network area is to be created.
[0019] In an exemplary aspect of the present disclosure, the set of user input for the enrichment comprises: a first input from the user for selection of at least one existing field from which the new network area is to be derived; and a second input from the user for selection of an operation to be executed on the selected at least one existing field.
[0020] In an exemplary aspect of the present disclosure, the enrichment of the network data based on the set of user input comprises: generating, by the indexer, a value corresponding to the executed operation on the selected at least one existing field; mapping, by the indexer, the generated value to a pre-defined value provided within a data set; and assigning, by the indexer, the mapped value to the created new network area.
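As a non-limiting illustration of the enrichment of paragraphs [0019]-[0020], the following hypothetical sketch executes a user-selected operation on an existing field, maps the generated value against a user-supplied data set (e.g., an uploaded spreadsheet), and assigns the mapped value to the new network-area field. All names and the example mapping are invented for this sketch:

```python
# Hypothetical enrichment sketch: generate a value from an existing field via a
# user-selected operation, map it to a pre-defined value, assign it to the new area.

def enrich_network_area(record, source_field, operation, mapping, target_field):
    """Derive a new field from an existing one via an operation and a mapping.

    record       -- a network data document (dict)
    source_field -- the existing field selected by the user (first input)
    operation    -- callable executed on that field's value (second input)
    mapping      -- data set mapping generated values to pre-defined values
    target_field -- the name of the newly created network-area field
    """
    generated = operation(record[source_field])   # generate the value
    mapped = mapping.get(generated, generated)    # map to a pre-defined value
    record[target_field] = mapped                 # assign to the new network area
    return record

# Example: derive a region-style network area from the first three characters
# of a cell identifier, via a spreadsheet-like mapping uploaded by the user.
mapping = {"DEL": "Delhi-Region", "MUM": "Mumbai-Region"}
doc = {"cell_id": "DEL-0042", "throughput_mbps": 118.5}
enriched = enrich_network_area(doc, "cell_id", lambda v: v[:3], mapping, "network_area")
```

The `mapping.get(generated, generated)` fallback reflects one possible design choice, keeping the raw generated value when no pre-defined mapping entry exists.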
[0021] According to another aspect of the present disclosure, a system for creating a network area is disclosed. The system comprises: a User Interface (UI) configured to receive a request for creating the network area; and a load balancer configured to transmit the request to an integrated performance management (IPM). Further, the IPM is configured to: store a data associated with the request at a Distributed Data Lake (DDL); and transmit the data associated with the request to an Indexer (IN). Furthermore, the system comprises an Indexer configured to: analyse the data associated with the request to create the network area; create the network area based on the analysis of the data associated with the request; enrich a network data associated with the created network area based on a set of user input received from a user; and upload the enriched network data at the DDL for storage.
[0022] According to yet another aspect of the present disclosure, a user equipment (UE) for creating a network area is disclosed. The UE comprises a processor configured to: send, via a User Interface (UI), a request for creating the network area; transmit, via a load balancer, the request to an integrated performance management (IPM); store, via the IPM, the data associated with the request at a Distributed Data Lake (DDL); transmit, via the IPM, the data associated with the request to an Indexer (IN); analyse, via the Indexer, the data associated with the request to create the network area; create, via the Indexer, the network area based on the analysis of the data associated with the request; enrich, via the Indexer, a network data associated with the created network area based on a set of user input received from a user; and upload, via the indexer, the enriched network data at the DDL for storage.
[0023] Yet another aspect of the present disclosure relates to a non-transitory computer-readable storage medium storing instructions for creating a network area, the storage medium comprising executable code which, when executed by one or more units of a system, causes: a User Interface (UI) [202] to receive a request for creating the network area; a load balancer [100k] to transmit the request to an integrated performance management (IPM) [100a]; the IPM [100a] to: store a data associated with the request at a Distributed Data Lake (DDL) [100u]; and transmit the data associated with the request to an Indexer (IN) [208]; and an Indexer [208] to: analyse the data associated with the request to create the network area, create the network area based on the analysis of the data associated with the request, enrich a network data associated with the created network area based on a set of user input received from a user, and upload the enriched network data at the DDL [100u] for storage.
BRIEF DESCRIPTION OF THE DRAWINGS
[0024] The accompanying drawings, which are incorporated herein, and constitute a part of this disclosure, illustrate exemplary embodiments of the disclosed methods and systems in which like reference numerals refer to the same parts throughout the different drawings. Components in the drawings are not necessarily to scale, emphasis instead being placed upon clearly illustrating the principles of the present disclosure. Also, the embodiments shown in the figures are not to be construed as limiting the disclosure, but the possible variants of the method and system according to the disclosure are illustrated herein to highlight the advantages of the disclosure. It will be appreciated by those skilled in the art that disclosure of such drawings includes disclosure of electrical components or circuitry commonly used to implement such components.
[0025] Fig. 1 illustrates an exemplary block diagram of a network performance management system, in accordance with the exemplary embodiments of the present invention.
[0026] Fig. 2 illustrates an exemplary system for creating a network area i.e., a static network area and a dynamic network area, in accordance with the exemplary embodiments of the present invention.
[0027] Fig. 3 illustrates an exemplary method flow diagram indicating the process for creating a network area i.e., a static network area and a dynamic network area, in accordance with the exemplary embodiments of the present invention.
[0028] Fig. 4 illustrates an exemplary process for creating a network area i.e., a static network area and a dynamic network area, in accordance with the exemplary embodiments of the present invention.
[0029] Fig. 5 illustrates an exemplary block diagram of a computing device upon which an embodiment of the present disclosure may be implemented.
[0030] The foregoing shall be more apparent from the following more detailed description of the disclosure.
DETAILED DESCRIPTION
[0031] In the following description, for the purposes of explanation, various specific details are set forth in order to provide a thorough understanding of embodiments of the present disclosure. It will be apparent, however, that embodiments of the present disclosure may be practiced without these specific details. Several features described hereafter may each be used independently of one another or with any combination of other features. An individual feature may not address any of the problems discussed above or might address only some of the problems discussed above.
[0032] The ensuing description provides exemplary embodiments only, and is not intended to limit the scope, applicability, or configuration of the disclosure. Rather, the ensuing description of the exemplary embodiments will provide those skilled in the art with an enabling description for implementing an exemplary embodiment. It should be understood that various changes may be made in the function and arrangement of elements without departing from the spirit and scope of the disclosure as set forth.
[0033] Specific details are given in the following description to provide a thorough understanding of the embodiments. However, it will be understood by one of ordinary skill in the art that the embodiments may be practiced without these specific details. For example, circuits, systems, processes, and other components may be shown as components in block diagram form in order not to obscure the embodiments in unnecessary detail.
[0034] Also, it is noted that individual embodiments may be described as a process which is depicted as a flowchart, a flow diagram, a data flow diagram, a structure diagram, or a block diagram. Although a flowchart may describe the operations as a sequential process, many of the operations may be performed in parallel or concurrently. In addition, the order of the operations may be re-arranged. A process is terminated when its operations are completed but could have additional steps not included in a figure.
[0035] The word “exemplary” and/or “demonstrative” is used herein to mean serving as an example, instance, or illustration. For the avoidance of doubt, the subject matter disclosed herein is not limited by such examples. In addition, any aspect or design described herein as “exemplary” and/or “demonstrative” is not necessarily to be construed as preferred or advantageous over other aspects or designs, nor is it meant to preclude equivalent exemplary structures and techniques known to those of ordinary skill in the art. Furthermore, to the extent that the terms “includes,” “has,” “contains,” and other similar words are used in either the detailed description or the claims, such terms are intended to be inclusive — in a manner similar to the term “comprising” as an open transition word — without precluding any additional or other elements.
[0036] As used herein, a “processing unit” or “processor” or “operating processor” includes one or more processors, wherein processor refers to any logic circuitry for processing instructions. A processor may be a general-purpose processor, a special purpose processor, a conventional processor, a digital signal processor, a plurality of microprocessors, one or more microprocessors in association with a DSP core, a controller, a microcontroller, Application Specific Integrated Circuits, Field Programmable Gate Array circuits, any other type of integrated circuits, etc. The processor may perform signal coding data processing, input/output processing, and/or any other functionality that enables the working of the system according to the present disclosure. More specifically, the processor or processing unit is a hardware processor.
[0037] As used herein, “a user equipment”, “a user device”, “a smart-user-device”, “a smartdevice”, “an electronic device”, “a mobile device”, “a handheld device”, “a wireless communication device”, “a mobile communication device”, “a communication device” may be any electrical, electronic and/or computing device or equipment, capable of implementing the features of the present disclosure. The user equipment/device may include, but is not limited to, a mobile phone, smart phone, laptop, a general-purpose computer, desktop, personal digital assistant, tablet computer, wearable device or any other computing device which is capable of
implementing the features of the present disclosure. Also, the user device may contain at least one input means configured to receive an input from at least one of a transceiver unit, a processing unit, a storage unit, a detection unit and any other such unit(s) which are required to implement the features of the present disclosure.
[0038] As used herein, “storage unit” or “memory unit” refers to a machine or computer-readable medium including any mechanism for storing information in a form readable by a computer or similar machine. For example, a computer-readable medium includes read-only memory (“ROM”), random access memory (“RAM”), magnetic disk storage media, optical storage media, flash memory devices or other types of machine-accessible storage media. The storage unit stores at least the data that may be required by one or more units of the system to perform their respective functions.
[0039] As used herein, an indexer refers to a component within the network system that analyses data associated with a user's request to create and enrich network areas. The indexer performs data analysis, creates network areas based on the analysed data, enriches the network data by applying user-defined operations to existing fields, and assigns the resulting values to the new network areas. The enriched data is then stored in the Distributed Data Lake (DDL) for future use and retrieval.
[0040] As used herein, nodes refer to individual or multiple points within a network that can process or transfer data. These nodes can represent various entities, such as devices, servers, or virtual entities, and are essential components in the creation and management of network areas. The nodes serve as the building blocks for network configurations, allowing users to define and categorize different segments of the network based on specific criteria and operations.
[0041] As used herein, categories refer to classifications or groups within a network that organize nodes or data based on shared characteristics or attributes. These categories help in structuring the network by grouping similar types of data or nodes, facilitating more targeted analysis and management. Users can select these categories when creating network areas, enabling customized and efficient organization of network resources.
[0042] As used herein, network area refers to a defined segment within a network created for specific analysis or management purposes. The network area can encompass static or dynamic configurations and includes selected nodes and categories that are grouped based on user-defined
criteria. The segmentation allows for focused monitoring, performance assessment, and enrichment of network data, enhancing the ability to drill down or roll up information for comprehensive network management.
[0043] As discussed in the background section, the current known solutions have several shortcomings. The present disclosure aims to overcome the above-mentioned and other existing problems in this field of technology by providing a solution that can create a static network area and a dynamic network area from the existing information/fields, or from a sub-part of an existing field available at disposal in network systems, wherein the dynamic network area and the static network area are created at least to fulfill a user requirement of a new field in the documents and results. Moreover, based on the implementation of the features of the present disclosure, Network Areas, i.e., the dynamic network area and the static network area, can be created at different granularities. They can be created for one Network Node only, for multiple Nodes in the Network, for one category in a Network Node, or for selected Categories in a Network Node, etc. Hence, this helps in drilling down the information at various levels in the Network for enhanced analysis. Also, the solution is executed on the stored values of the counters in the database before displaying the output, and it helps operations to roll up and drill down the monitoring of KPIs and counters for their troubleshooting.
[0044] Hereinafter, exemplary embodiments of the present disclosure will be described with reference to the accompanying drawings.
[0045] Fig. 1 illustrates an exemplary block diagram of a network performance management system [100], in accordance with the exemplary embodiments of the present invention. Referring to Fig. 1, the network performance management system [100] comprises various sub-systems such as: integrated performance management system [100a], normalization layer [100b], computation layer [100d], anomaly detection layer [100o], streaming engine [100l], load balancer [100k], operations and management system [100p], API gateway system [100r], analysis engine [100h], parallel computing framework [100i], forecasting engine [100t], distributed file system [100j], mapping layer [100s], distributed data lake [100u], scheduling layer [100g], reporting engine [100m], message broker [100e], graph layer [100f], caching layer [100c], service quality manager [100q] and correlation engine [100n]. Exemplary connections between these subsystems are also as shown in Fig. 1. However, it will be appreciated by those skilled in the art that the present disclosure is not limited to the connections shown in the diagram, and any other connections between various subsystems that are needed to realise the effects are within the scope of this disclosure.
[0046] Following are the various components of the system [100], the various components may include:
[0047] The integrated performance management system [100a] comprises one or more 5G Performance Engines [100v] and one or more 5G Key Performance Indicator (KPI) Engines [100w].
[0048] Integrated performance management (IPM) system [100a]: The IPM collects performance counters to visualize the performance counters of a node, creates and analyses the KPIs, and creates counter/KPI reports consisting of single or multiple nodes with multiple levels of aggregation.
[0049] 5G Performance Management Engine [100v]: The 5G Performance Management engine [100v] is a crucial component of the integrated system, responsible for collecting, processing, and managing performance counter data from various data sources within the network. The gathered data includes metrics such as connection speed, latency, data transfer rates, and many others. This raw data is then processed and aggregated as required, forming a comprehensive overview of network performance. The processed information is then stored in a Distributed Data Lake [100u], a centralized, scalable, and flexible storage solution, allowing for easy access and further analysis. The 5G Performance Management engine [100v] also enables the reporting and visualization of this performance counter data, thus providing network administrators with a real-time, insightful view of the network's operation. Through these visualizations, operators can monitor the network's performance, identify potential issues, and make informed decisions to enhance network efficiency and reliability.
[0050] 5G Key Performance Indicator (KPI) Engine [100w]: The 5G Key Performance Indicator (KPI) Engine is a dedicated component tasked with managing the KPIs of all the network elements. It uses the performance counters, which are collected and processed by the 5G Performance Management engine from various data sources. These counters, encapsulating crucial performance data, are harnessed by the KPI engine [100w] to calculate essential KPIs. These KPIs might include data throughput, latency, packet loss rate, and more. Once the KPIs are computed, they are segregated based on the aggregation requirements, offering a multi-layered and detailed understanding of network performance. The processed KPI data is then stored in the Distributed Data Lake [100u], ensuring a highly accessible, centralized, and scalable data repository for further analysis and utilization. Similar to the Performance Management engine, the KPI engine [100w] is also responsible for reporting and visualization of KPI data. This functionality allows network administrators to gain a comprehensive, visual understanding of the network's performance, thus supporting informed decision-making and efficient network management.
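By way of non-limiting illustration only, the derivation of KPIs from raw performance counters, and their aggregation across nodes, may be sketched as follows. The counter names, KPI formulas, and aggregation (a simple mean) are hypothetical examples, not the specific KPIs or aggregation rules of the disclosed engine:

```python
# Illustrative sketch: derive KPIs from per-node counter samples, then
# aggregate a KPI across nodes. Counter and KPI names are invented examples.

def compute_kpis(counters):
    """Derive simple KPIs from one node's counter sample."""
    attempted = counters["packets_attempted"]
    lost = counters["packets_lost"]
    return {
        "packet_loss_rate": lost / attempted if attempted else 0.0,
        # bytes -> bits -> megabits, divided by the collection interval
        "throughput_mbps": counters["bytes_transferred"] * 8 / 1e6 / counters["interval_s"],
    }

def aggregate_kpi(samples, kpi):
    """One possible network-level aggregation: the mean of a per-node KPI."""
    values = [compute_kpis(s)[kpi] for s in samples]
    return sum(values) / len(values)

samples = [
    {"packets_attempted": 1000, "packets_lost": 10,
     "bytes_transferred": 12_500_000, "interval_s": 1},
    {"packets_attempted": 2000, "packets_lost": 10,
     "bytes_transferred": 25_000_000, "interval_s": 1},
]
network_loss = aggregate_kpi(samples, "packet_loss_rate")  # mean of 0.01 and 0.005
```

In the disclosed system, such computed KPIs would then be segregated by aggregation level and stored in the Distributed Data Lake [100u].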
[0051] Ingestion layer [not shown]: The Ingestion layer forms a key part of the Integrated Performance Management system. Its primary function is to establish an environment capable of handling diverse types of incoming data. This data may include Alarms, Counters, Configuration parameters, Call Detail Records (CDRs), Infrastructure metrics, Logs, and Inventory data, all of which are crucial for maintaining and optimizing the network's performance. Upon receiving this data, the Ingestion layer processes it by validating its integrity and correctness to ensure it is fit for further use. Following validation, the data is routed to various components of the system, including the Normalization layer, Streaming Engine, Streaming Analytics, and Message Brokers. The destination is chosen based on where the data is required for further analytics and processing. By serving as the first point of contact for incoming data, the Ingestion layer plays a vital role in managing the data flow within the system, thus supporting comprehensive and accurate network performance analysis.
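The validate-then-route behaviour of the Ingestion layer may be illustrated, purely hypothetically, as follows. The routing table, record schema, and destination names are placeholders for this sketch only:

```python
# Hypothetical ingestion sketch: validate each incoming record's integrity,
# then route it to a downstream component chosen by data type.

ROUTES = {
    "alarm": "normalization_layer",
    "counter": "normalization_layer",
    "cdr": "streaming_engine",
    "log": "message_broker",
}

def ingest(record):
    """Validate correctness, then pick a destination by data type."""
    if "type" not in record or "payload" not in record:
        return ("rejected", None)          # failed integrity validation
    destination = ROUTES.get(record["type"])
    if destination is None:
        return ("rejected", None)          # unknown data type
    return ("accepted", destination)

status, dest = ingest({"type": "counter", "payload": {"rsrp": -98}})
```

A real ingestion layer would perform far richer validation (schema, ranges, timestamps) before routing; the sketch only shows the gatekeeper-plus-router shape described above.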
[0052] Normalization layer [100b]: The Normalization Layer [100b] serves to standardize, enrich, and store data into the appropriate databases. It takes in data that has been ingested and adjusts it to a common standard, making it easier to compare and analyse. This process of "normalization" reduces redundancy and improves data integrity. Upon completion of normalization, the data is stored in various databases like the Distributed Data Lake [100u], Caching Layer [100c], and Graph Layer [100f], depending on its intended use. The choice of storage determines how the data can be accessed and used in the future. Additionally, the Normalization Layer [100b] produces data for the Message Broker, a system that enables communication between different parts of the performance management system through the exchange of data messages. Moreover, the Normalization Layer [100b] supplies the standardized data to several other subsystems. These include the Analysis Engine for detailed data examination, the Correlation Engine [100n] for detecting relationships among various data elements, the Service Quality Manager [100q] for maintaining and improving the quality of services, and the Streaming Engine [100l] for processing real-time data streams. These subsystems depend on the normalized data to perform their operations effectively and accurately, demonstrating the Normalization Layer's [100b] critical role in the entire system.
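For illustration only, adjusting ingested data "to a common standard" can be as simple as mapping vendor-specific field names onto one canonical schema, as in this hypothetical sketch (the alias table and field names are invented):

```python
# Hypothetical normalization sketch: vendor-specific field names from different
# sources are mapped onto one canonical, lowercase schema so records compare.

FIELD_ALIASES = {
    "dl_thpt": "downlink_throughput",
    "DL_THROUGHPUT": "downlink_throughput",
}

def normalize(record):
    """Rename aliased fields and lowercase keys into the canonical schema."""
    return {FIELD_ALIASES.get(k, k).lower(): v for k, v in record.items()}

# Two vendors reporting the same metric under different names:
a = normalize({"dl_thpt": 95.2, "Node": "gNB-1"})
b = normalize({"DL_THROUGHPUT": 88.1, "node": "gNB-2"})
```

After normalization both records share one key set, which is what lets downstream subsystems compare and analyse them uniformly.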
[0053] Caching layer [100c]: The Caching Layer [100c] in the Integrated Performance Management system plays a significant role in data management and optimization. During the initial phase, the Normalization Layer [100b] processes incoming raw data to create a standardized format, enhancing consistency and comparability. The Normalizer Layer then inserts this normalized data into various databases. One such database is the Caching Layer [100c]. The Caching Layer [100c] is a high-speed data storage layer which temporarily holds data that is likely to be reused, to improve speed and performance of data retrieval. By storing frequently accessed data in the Caching Layer [100c], the system significantly reduces the time taken to access this data, improving overall system efficiency and performance. Further, the Caching Layer [100c] serves as an intermediate layer between the data sources and the sub-systems, such as the Analysis Engine, Correlation Engine [100n], Service Quality Manager, and Streaming Engine. The Normalization Layer [100b] is responsible for providing these sub-systems with the necessary data from the Caching Layer [100c].
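The read-through behaviour described above, where repeated retrievals are served from fast storage after one slow fetch, can be sketched minimally as follows (a toy, not the disclosed implementation):

```python
# Illustrative read-through cache: the first access fetches from the slower
# backing store; subsequent accesses for the same key are served from memory.

class CachingLayer:
    def __init__(self, backing_store):
        self.backing_store = backing_store
        self.cache = {}
        self.misses = 0

    def get(self, key):
        if key not in self.cache:
            self.misses += 1                       # slow path: fetch from store
            self.cache[key] = self.backing_store[key]
        return self.cache[key]                     # fast path thereafter

store = {"gNB-1:latency_ms": 12.4}
cache = CachingLayer(store)
first = cache.get("gNB-1:latency_ms")
second = cache.get("gNB-1:latency_ms")   # served from the cache, no second miss
```

A production caching layer would add eviction and expiry policies; the sketch shows only the intermediate-layer role described in paragraph [0053].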
[0054] Computation layer [100d]: The Computation Layer [100d] in the Integrated Performance Management system serves as the main hub for complex data processing tasks. In the initial stages, raw data is gathered, normalized, and enriched by the Normalization Layer [100b]. The Normalizer Layer then inserts this standardized data into multiple databases including the Distributed Data Lake [100u], Caching Layer [100c], and Graph Layer [100f], and also feeds it to the Message Broker [100e]. Within the Computation Layer [100d], several powerful sub-systems such as the Analysis Engine [100h], Correlation Engine [100n], Service Quality Manager, and Streaming Engine, utilize the normalized data. These systems are designed to execute various data processing tasks. The Analysis Engine performs in-depth data analytics to generate insights from the data. The Correlation Engine [100n] identifies and understands the relations and patterns within the data. The Service Quality Manager assesses and ensures the quality of the services. And the Streaming Engine processes and analyses the real-time data feeds. In essence, the Computation Layer [100d] is where all major computation and data processing tasks occur. It uses the normalized data provided by the Normalization Layer [100b], processing it to generate useful insights, ensure service quality, understand data patterns, and facilitate real-time data analytics.
[0055] Message broker [100e]: The Message Broker [100e], an integral part of the Integrated Performance Management system, operates as a publish-subscribe messaging system. It orchestrates and maintains the real-time flow of data from various sources and applications. At its core, the Message Broker [100e] facilitates communication between data producers and consumers through message-based topics. This creates an advanced platform for contemporary distributed applications. With the ability to accommodate a large number of permanent or ad-hoc consumers, the Message Broker [100e] demonstrates immense flexibility in managing data streams. Moreover, it leverages the filesystem for storage and caching, boosting its speed and efficiency. The design of the Message Broker [100e] is centred around reliability. It is engineered to be fault-tolerant and mitigate data loss, ensuring the integrity and consistency of the data. With its robust design and capabilities, the Message Broker [100e] forms a critical component in managing and delivering real-time data in the system.
[0056] Graph layer [100f]: The Graph Layer [100f], serving as the Relationship Modeler, plays a pivotal role in the Integrated Performance Management system. It can model a variety of data types, including alarm, counter, configuration, CDR data, Infra-metric data, 5G Probe Data, and Inventory data. Equipped with the capability to establish relationships among diverse types of data, the Relationship Modeler offers extensive modelling capabilities. For instance, it can model Alarm and Counter data, or Vprobe and Alarm data, elucidating their interrelationships. Moreover, the Modeler is adept at processing steps provided in the model and delivering the results to the system requested, whether it be a Parallel Computing system, Workflow Engine, Query Engine, Correlation System [100n], 5G Performance Management Engine, or 5G KPI Engine [100w]. With its powerful modelling and processing capabilities, the Graph Layer [100f] forms an essential part of the system, enabling the processing and analysis of complex relationships between various types of network data.
[0057] Scheduling layer [100g]: The Scheduling Layer [100g] serves as a key element of the Integrated Performance Management System, endowed with the ability to execute tasks at predetermined intervals set according to user preferences. A task might be an activity performing a service call, an API call to another microservice, or the execution of an Elastic Search query and storing its output in the Distributed Data Lake [100u] or Distributed File System or sending it to another micro-service. The versatility of the Scheduling Layer [100g] extends to facilitating graph traversals via the Mapping Layer to execute tasks. This crucial capability enables seamless and automated operations within the system, ensuring that various tasks and services are performed on schedule, without manual intervention, enhancing the system's efficiency and performance. In sum, the Scheduling Layer [100g] orchestrates the systematic and periodic execution of tasks, making it an integral part of the efficient functioning of the entire system.
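The periodic-execution behaviour of the Scheduling Layer may be illustrated with the following hypothetical sketch, which uses a simulated clock rather than real time; the task, its period, and the scheduler shape are invented for illustration:

```python
# Hypothetical scheduler sketch: tasks registered with a period are run
# whenever the (simulated) clock passes their next due time.

class Scheduler:
    def __init__(self):
        self.tasks = []   # each entry: period, next_due, callable

    def schedule(self, period, task):
        self.tasks.append({"period": period, "next_due": period, "task": task})

    def tick(self, now):
        """Run every task whose due time has passed, then reschedule it."""
        for entry in self.tasks:
            while entry["next_due"] <= now:
                entry["task"]()
                entry["next_due"] += entry["period"]

runs = []
scheduler = Scheduler()
scheduler.schedule(5, lambda: runs.append("export-to-DDL"))  # every 5 time units
scheduler.tick(11)   # due at t=5 and t=10, so the task runs twice
```

In the disclosed system the scheduled task could equally be a service call, an API call to another microservice, or an Elastic Search query whose output is stored in the DDL [100u].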
[0058] Analysis Engine [100h]: The Analysis Engine [100h] forms a crucial part of the Integrated Performance Management System, designed to provide an environment where users can configure and execute workflows for a wide array of use-cases. This facility aids in the debugging process and facilitates a better understanding of call flows. With the Analysis Engine [100h], users can perform queries on data sourced from various subsystems or external gateways. This capability allows for an in-depth overview of data and aids in pinpointing issues. The system's flexibility allows users to configure specific policies aimed at identifying anomalies within the data. When these policies detect abnormal behaviour or policy breaches, the system sends notifications, ensuring swift and responsive action. In essence, the Analysis Engine [100h] provides a robust analytical environment for systematic data interrogation, facilitating efficient problem identification and resolution, thereby contributing significantly to the system's overall performance management.
[0059] Parallel Computing Framework [100i]: The Parallel Computing Framework [100i] is a key aspect of the Integrated Performance Management System, providing a user-friendly yet advanced platform for executing computing tasks in parallel. This framework showcases both scalability and fault tolerance, crucial for managing vast amounts of data. Users can input data via Distributed File System (DFS) [100j] locations or Distributed Data Lake (DDL) indices. The framework supports the creation of task chains by interfacing with the Service Configuration Management (SCM) Sub-System. Each task in a workflow is executed sequentially, but multiple chains can be executed simultaneously, optimizing processing time. To accommodate varying task requirements, the service supports the allocation of specific host lists for different computing tasks. The Parallel Computing Framework [100i] is an essential tool for enhancing processing speeds and efficiently managing computing resources, significantly improving the system's performance management capabilities.
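The chain model described above — tasks within a chain run sequentially while separate chains run concurrently — can be sketched with a thread pool. The chain names, the toy arithmetic tasks, and the use of `concurrent.futures` are assumptions for illustration, not the disclosed framework.

```python
# Illustrative sketch: each chain is an ordered list of callables executed
# strictly one after another, while distinct chains run on separate workers.
from concurrent.futures import ThreadPoolExecutor

def run_chain(chain, value):
    # Tasks within one chain run sequentially, feeding each result forward.
    for task in chain:
        value = task(value)
    return value

chains = {
    "counter-aggregation": [lambda v: v + 1, lambda v: v * 2],
    "alarm-rollup": [lambda v: v * 10, lambda v: v - 5],
}

# Separate chains execute simultaneously, each on its own worker thread.
with ThreadPoolExecutor(max_workers=len(chains)) as pool:
    futures = {name: pool.submit(run_chain, chain, 3) for name, chain in chains.items()}
    results = {name: f.result() for name, f in futures.items()}

print(results)  # counter-aggregation: (3+1)*2 = 8, alarm-rollup: 3*10-5 = 25
```

In a real deployment each "task" would be a service or API call and chains could be pinned to specific host lists, but the sequential-within/parallel-across structure is the same.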
[0060] Distributed File System [100j]: The Distributed File System (DFS) [100j] is a critical component of the Integrated Performance Management System, enabling multiple clients to access and interact with data seamlessly. This file system is designed to manage data files that are partitioned into numerous segments known as chunks. In the context of a network with vast data, the DFS [100j] effectively allows for the distribution of data across multiple nodes. This architecture enhances both the scalability and redundancy of the system, ensuring optimal performance even with large data sets. The DFS [100j] also supports diverse operations, facilitating flexible interaction with and manipulation of data. This accessibility is paramount for a system that requires constant data input and output, as is the case in a robust performance management system.
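To make the chunk-partitioning idea concrete, here is a toy sketch that splits a byte payload into fixed-size chunks and assigns them round-robin to storage nodes. The chunk size, node names, and round-robin placement policy are assumptions for illustration; the disclosure does not specify a placement algorithm.

```python
# Toy sketch of DFS-style chunking: split data into fixed-size chunks and
# assign each chunk to a node in round-robin fashion.
def chunk_and_place(data, chunk_size, nodes):
    placement = {}
    for i in range(0, len(data), chunk_size):
        chunk_id = i // chunk_size
        placement[chunk_id] = {
            "node": nodes[chunk_id % len(nodes)],  # round-robin distribution
            "bytes": data[i:i + chunk_size],
        }
    return placement

layout = chunk_and_place(b"performance-counter-records", 8, ["node-a", "node-b", "node-c"])
print({cid: entry["node"] for cid, entry in layout.items()})
```

Reassembling the chunks in chunk-id order recovers the original file, which is the property a DFS client relies on when reading partitioned data back.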
[0061] Load Balancer [100k]: The Load Balancer (LB) [100k] is a vital component of the Integrated Performance Management System, designed to efficiently distribute incoming network traffic across a multitude of backend servers or microservices. Its purpose is to ensure the even distribution of data requests, leading to optimized server resource utilization, reduced latency, and improved overall system performance. The LB [100k] implements various routing strategies to manage traffic. These include round-robin scheduling, header-based request dispatch, and context-based request dispatch. Round-robin scheduling is a simple method of rotating requests evenly across available servers. In contrast, header and context-based dispatching allow for more intelligent, request-specific routing. Header-based dispatching routes requests based on data contained within the headers of the Hypertext Transfer Protocol (HTTP) requests. Context-based dispatching routes traffic based on the contextual information about the incoming requests. For example, in an event-driven architecture, the LB [100k] manages event and event acknowledgments, forwarding requests or responses to the specific microservice that has requested the event. This system ensures efficient, reliable, and prompt handling of requests, contributing to the robustness and resilience of the overall performance management system.
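Two of the routing strategies named above — round-robin rotation and header-based dispatch — can be sketched as follows. The header key `x-service`, the backend names, and the rule that a header match takes priority over rotation are assumptions for illustration, not behaviour defined by the disclosure.

```python
# Sketch of a load balancer combining header-based dispatch (when a routing
# header is present) with round-robin rotation as the fallback.
import itertools

class LoadBalancer:
    def __init__(self, servers, header_routes):
        self._rr = itertools.cycle(servers)   # round-robin rotation over backends
        self._header_routes = header_routes   # header value -> dedicated backend

    def route(self, headers):
        # Header-based dispatch takes priority when a matching header is present.
        target = self._header_routes.get(headers.get("x-service"))
        return target if target else next(self._rr)

lb = LoadBalancer(["ipm-1", "ipm-2"], {"reporting": "report-engine"})
print(lb.route({"x-service": "reporting"}))  # routed by header
print(lb.route({}))                          # falls back to round-robin: ipm-1
print(lb.route({}))                          # next in rotation: ipm-2
```

Context-based dispatch would extend `route()` to inspect request metadata beyond headers (for example, event type in an event-driven flow), but the priority-then-fallback shape would stay the same.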
[0062] Streaming Engine [100l]: The Streaming Engine [100l], also referred to as Stream Analytics, is a critical subsystem in the Integrated Performance Management System. This engine is specifically designed for high-speed data pipelining to the User Interface (UI). Its core objective is to ensure real-time data processing and delivery, enhancing the system's ability to respond promptly to dynamic changes. Data is received from various connected subsystems and processed in real-time by the Streaming Engine [100l]. After processing, the data is streamed to the UI, fostering rapid decision-making and responses. The Streaming Engine [100l] cooperates with the Distributed Data Lake [100u], Message Broker [100e], and Caching Layer [100c] to provide seamless, real-time data flow. Stream Analytics is designed to perform required computations on incoming data instantly, ensuring that the most relevant and up-to-date information is always available at the UI. Furthermore, this system can also retrieve data from the Distributed Data Lake [100u], Message Broker [100e], and Caching Layer [100c] as per the requirement and deliver it to the UI in real-time. The Streaming Engine's [100l] ultimate goal is to provide fast, reliable, and efficient data streaming, contributing to the overall performance of the management system.
[0063] Reporting Engine [100m]: The Reporting Engine [100m] is a key subsystem of the Integrated Performance Management System. The fundamental purpose of designing the Reporting Engine [100m] is to dynamically create report layouts of API data, catered to individual client requirements, and deliver these reports via the Notification Engine (not shown). The Reporting Engine [100m] serves as the primary interface for creating custom reports based on the data visualized through the client's dashboard. These custom dashboards, created by the client through the User Interface (UI), provide the basis for the Reporting Engine [100m] to process and compile data from various interfaces. The main output of the Reporting Engine [100m] is a detailed report generated in spreadsheet format. The Reporting Engine's [100m] unique capability to parse data from different subsystem interfaces, process it according to the client's specifications and requirements, and generate a comprehensive report makes it an essential component of this performance management system. Furthermore, the Reporting Engine [100m] integrates seamlessly with the Notification Engine (not shown) to ensure timely and efficient delivery of reports to clients via email, ensuring the information is readily accessible and usable, thereby improving overall client satisfaction and system usability.
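The spreadsheet-format report generation described above can be sketched by compiling rows gathered from subsystem interfaces into CSV. The column names and the in-memory row format are illustrative assumptions; the disclosure does not fix a report schema.

```python
# Sketch of spreadsheet-style report generation: given rows collected from
# subsystem interfaces, emit a CSV report with the client's chosen columns.
import csv
import io

def build_report(rows, columns):
    buffer = io.StringIO()
    writer = csv.DictWriter(buffer, fieldnames=columns)
    writer.writeheader()
    for row in rows:
        # Missing fields are left blank so partial data still produces a report.
        writer.writerow({col: row.get(col, "") for col in columns})
    return buffer.getvalue()

rows = [
    {"node": "amf-1", "latency_ms": 20, "packet_loss": 0.1},
    {"node": "smf-1", "latency_ms": 35, "packet_loss": 0.0},
]
print(build_report(rows, ["node", "latency_ms", "packet_loss"]))
```

A delivery step (here omitted) would hand the generated file to the Notification Engine for emailing to the client.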
[0064] The present invention focuses on the creation of network areas, i.e., a dynamic network area and a static network area, via a user interface (UI), an integrated performance management system (IPMS), an indexer (IN), and a distributed data lake (DDL). In order to create the network areas, in an implementation, the solution as disclosed by the present disclosure is implemented via an exemplary system [200] as shown in Fig. 2 for creating the static network area and the dynamic network area, in accordance with the exemplary embodiments of the present invention, wherein the system [200] works in conjunction with the system [100]. In an implementation, the dynamic network area refers to a flexible and changing network environment based on real-time network conditions, such as a network IP address. In an implementation, the static network area refers to a fixed network environment set up for providing services in the network.
[0065] Referring now to Fig. 2, which illustrates an exemplary system for creating a network area, i.e., a static network area and a dynamic network area, in accordance with the exemplary embodiments of the present invention. In an operation, as shown in Fig. 2, the system [200] comprises at least one user interface (UI) [202], at least one load balancer [100k], at least one integrated performance management (IPM)/integrated performance management system (IPMS) [100a], at least one indexer (IN) [208], and at least one distributed data lake (DDL) [100u]. The devices/components shown in Fig. 2 are for illustrative purposes only; the system [200] is not restricted to the shown devices/components, and there may be more devices/components present in the system [200].
[0066] Further, the UI [202] of the system [200] is configured to receive a request for creating the network area. In an implementation, a user or a network administrator may request creation of the network area from the UI [202]. Further, the request to create the network area comprises at least a selection of one or more nodes and one or more categories for which the network area is to be created, i.e., the user may provide, with the request, at least one of parameters such as, but not limited to, a cluster, a circle, a number of network node(s) (e.g., one or more), one or more category types of network nodes (e.g., customer service type, network establishing type), one or more types of network fields (e.g., static network area, dynamic network area), one or more network attributes (e.g., throughput, latency, packet loss rate, performance counter, etc.), and one or more geographic locations and boundary regions. In an implementation, the UI [202] may be a part of, or externally attached to, a computing device, smartphone, laptop, human machine interface (HMI), and the like. After the request for creating the network area is received, the user or network administrator may save the created network area.
[0067] In an implementation, the network area comprises at least one of the static network area and/or the dynamic network area. In an implementation, the one or more nodes comprise at least one of servers, switches, databases, and gateways. In an aspect, the one or more nodes may be associated with a communication network. In an implementation, the one or more nodes may be associated in the communication network with network functions, such as the access and mobility management function (AMF) and the session management function (SMF). In an implementation, the one or more nodes comprise servers or databases associated with the AMF and SMF. Further, in an implementation, the one or more categories may be at least one of, but not limited to, a customer service type, a network service establishing type, and a premium service type.
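One way the creation request described above might be represented is sketched below. All field names (`area_type`, `nodes`, `categories`, and so on) are assumptions derived from the listed parameters, not a schema defined by the disclosure.

```python
# Illustrative representation of a network-area creation request, with a
# minimal validation that mirrors the requirement of at least one node and
# one category. Field names are assumptions for illustration.
from dataclasses import dataclass, field

@dataclass
class NetworkAreaRequest:
    area_type: str                                   # "static" or "dynamic"
    nodes: list                                      # e.g. servers, switches, gateways
    categories: list                                 # e.g. customer service type
    attributes: dict = field(default_factory=dict)   # throughput, latency, ...
    geographic_location: str = ""

    def validate(self):
        # A request must name at least one node, at least one category,
        # and a recognized network-area type.
        return bool(self.nodes) and bool(self.categories) and \
            self.area_type in ("static", "dynamic")

req = NetworkAreaRequest(
    area_type="static",
    nodes=["amf-server-1", "smf-db-1"],
    categories=["customer service type"],
    attributes={"latency_ms": 20},
)
print(req.validate())
```

In the described flow, a payload of this shape would travel from the UI [202] through the load balancer [100k] to the IPM [100a] before being stored in the DDL [100u].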
[0068] The system [200] further comprises the load balancer [100k], which may distribute the incoming traffic from the UI [202] or another network component/device. The load balancer [100k] efficiently routes the traffic to other network components so that network operation is optimally maintained and performance is not affected. The load balancer [100k] is configured to transmit the request to an integrated performance management (IPM) [100a]. In an implementation, the load balancer [100k] may transmit the network traffic from the UI [202] to one of the IPM [100a] instances (hereinafter also referred to as the IPMS unit [100a]) which has a low network load.
[0069] The system [200] further comprises the IPM/IPMS unit [100a], which is configured to store data associated with the request at a Distributed Data Lake (DDL) [100u], i.e., the data related to the user's created or configured network area and the defined set of parameters received from the UI [202] via the load balancer [100k]. The IPM/IPMS unit [100a] stores the received network area creation request data and the set of parameters into the DDL [100u]. The IPM [100a] is further configured to transmit the data associated with the request to the indexer, i.e., to send the received network area request data and the set of parameters to the indexer for analysing and creating the network area.
[0070] The system [200] further comprises the Indexer [208], which is configured to analyse the data associated with the request to create the network area. Further, the indexer [208] may be configured to analyse the set of parameters associated with the network area. Further, the indexer [208] may be configured to create the network area based on the analysis of the data associated with the request, i.e., the indexer [208] analyses the user's requested network area data with the set of parameters and creates the network area based on the analysis of the network data associated with the request. Further, the indexer [208] is configured to enrich network data associated with the created network area based on a set of user inputs received from a user. Further, the enrichment of the network data is performed at a predefined scheduled interval of time. The indexer [208] may provide one or more option(s) to receive the set of inputs from the user to enrich the network data associated with the created network area. Thereafter, the indexer [208] is configured to upload the enriched network data at the DDL [100u] for storage, i.e., to store the enriched network data into the DDL [100u].
[0071] Further, as disclosed by the present disclosure, the set of user inputs for the enrichment comprises a first input from the user for selection of at least one existing field from which the new network area is to be derived, and a second input from the user for selection of an operation to be executed on the selected at least one existing field. Further, to enrich the network data based on the set of user inputs, the indexer [208] is configured to generate a value corresponding to the executed operation on the selected at least one existing field. Further, to enrich the network data based on the set of user inputs, the indexer [208] is configured to map the generated value to a pre-defined value provided within a data set. Thereafter, to enrich the network data based on the set of user inputs, the indexer [208] is configured to assign the mapped value to the created new network area.
[0072] In an implementation, the user may select, via the UI [202], at least one existing field, such as a 'static network area field' or a 'dynamic network area field', from which a new network area is to be derived via the indexer [208]. Further, the user may provide one or more inputs for selection of an operation to be executed on the selected at least one existing field. In an implementation, the user may perform one or more operations such as, but not limited to, concatenating, splitting, or otherwise transforming the data. The operation helps transform or manipulate the data within the existing field. Based on the operation applied to the selected existing field, the indexer [208] generates a value and then maps this generated value to a corresponding pre-defined value within a data set. In an implementation, the user or network administrator may define a pre-defined data set and values in a spreadsheet format. Further, the user or network administrator may define predefined mappings or rules that specify how certain or exemplary values should be translated or interpreted in the spreadsheet. Thereafter, the indexer [208] assigns the mapped value to the created new network area. In an implementation, the mapped value obtained from the spreadsheet is assigned to the newly created network area. This value represents the desired outcome or characteristic of the network area based on the selected node, category, existing field, and applied operation.
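The three-step enrichment flow just described — apply an operation to a selected existing field, map the derived value through a predefined lookup (standing in for the spreadsheet), and assign the result to the new network area — can be sketched as follows. The field names, the splitting operation, and the region-to-area mapping are assumptions for illustration only.

```python
# Minimal sketch of the enrichment flow: generate a value from an existing
# field, map it through a predefined data set, and assign the mapped value
# to the new network area.
def enrich_network_area(record, existing_field, operation, mapping):
    derived = operation(record[existing_field])   # step 1: generate value
    mapped = mapping.get(derived, "unmapped")     # step 2: map to pre-defined value
    record["new_network_area"] = mapped           # step 3: assign to new area
    return record

# Predefined data set, as a user/administrator might define in a spreadsheet.
spreadsheet_mapping = {"mumbai": "SNA-West", "delhi": "SNA-North"}

record = {"static_network_area": "mumbai-cluster-07"}
split_op = lambda value: value.split("-")[0]      # splitting: take the region prefix

print(enrich_network_area(record, "static_network_area", split_op, spreadsheet_mapping))
```

A concatenation operation would follow the same shape, with `operation` joining several field fragments before the mapping step.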
[0073] In an implementation, the indexer [208] stores the data associated with the created new network area into the DDL [100u].
[0074] Furthermore, based on the implementation of the features of the present disclosure, the user selects the node and category for which the user wants to create the network area. Then, the existing fields (e.g., SNA/HNA/CNA, etc.) are selected from which the new network area needs to be derived. Thereafter, the user selects the operation whose application on the existing field gives a value, which is then mapped to a value in a provided file (e.g., a spreadsheet). These values are then assigned to the created network area.
[0075] It is pertinent to note that the system [200] is capable of implementing the features that are obvious to a person skilled in the art in light of the disclosure as disclosed above and the implementation of the system is not limited to the above disclosure.
[0076] Referring to Fig. 3, an exemplary method flow diagram [300] for creating a network area, i.e., a static network area and a dynamic network area, in accordance with exemplary embodiments of the present invention, is shown. In an implementation, the method [300] is performed by the system [200]. As shown in Fig. 3, the method [300] starts at step [302].
[0077] At step [304], the method [300] as disclosed by the present disclosure comprises receiving, at a user interface (UI) [202], a request for creating the network area. In an implementation of the present disclosure, the network area comprises at least one of a static network area or a dynamic network area. In an implementation of the present disclosure, the request to create the network area comprises at least a selection of one or more nodes and one or more categories for which the network area is to be created. In an implementation, the user may submit, via the UI [202], the request for creating the network area based on one or more network nodes or one or more category types. In an implementation, the category type may be, such as, but not limited to, a customer service type, a service level, and the like.
[0078] Next, at step [306], the method [300] as disclosed by the present disclosure comprises transmitting, by a load balancer [100k], the request to an integrated performance management (IPM) [100a]. The method [300] implemented by the system [200] comprises transmitting, by the load balancer [100k], the incoming request from the UI [202] to the IPM [100a]. In an implementation, the load balancer [100k] efficiently routes the traffic to the IPM [100a] or other network components so that network operation is optimally maintained and performance is not affected. In an implementation, the load balancer [100k] may transmit the network traffic from the UI [202] to one of the IPM/IPMS units [100a] which has a low network load.
[0079] Next, at step [308], the method [300] as disclosed by the present disclosure comprises storing, by the IPM [100a], data associated with the request at a Distributed Data Lake (DDL) [100u]. The method [300] implemented by the system [200] comprises the IPM [100a], which stores the data associated with the request at the DDL [100u]. In an implementation, the IPM/IPMS unit [100a] may receive request data related to the user's created or configured network area and the defined set of parameters from the UI [202] via the load balancer [100k].
[0080] Next, at step [310], the method [300] as disclosed by the present disclosure comprises transmitting, by the IPM [100a], the data associated with the request to an Indexer (IN) [208]. The method [300] implemented by the system [200] comprises the IPM unit [100a] transmitting the data associated with the request to the Indexer (IN) [208]. In an implementation, the IPM [100a] receives the request data related to the user-created and/or configured network area and the defined set of parameters from the UI [202] via the load balancer [100k]. The IPM/IPMS unit [100a] transmits the received network area request data and the set of parameters to the indexer [208] for analysing and creating the network area.
[0081] Next, at step [312], the method [300] as disclosed by the present disclosure comprises analysing, by the Indexer [208], the data associated with the request to create the network area. The method [300] further comprises the indexer [208], wherein the indexer [208] analyses the data associated with the request to create the network area. The indexer [208] performs one or more pre-processing or processing operations on the incoming data associated with the request to create the network area with the user-defined set of parameters, number of nodes, types of category, and the like.
[0082] Next, at step [314], the method [300] as disclosed by the present disclosure comprises creating, by the Indexer [208], the network area based on the analysis of the data associated with the request. The method [300] comprises the indexer [208] creating the network area based on the analysis of the data associated with the request. In an implementation, the indexer [208] may create the network area as one of the types, such as the static network area or the dynamic network area.
[0083] Next, at step [316], the method [300] as disclosed by the present disclosure comprises enriching, by the indexer [208], network data associated with the created network area based on a set of user inputs received from a user. The method [300] comprises the indexer [208] enriching the network data associated with the created network area based on a set of user inputs received from a user. In an implementation, the indexer [208] may provide one or more option(s) to receive the set of inputs from the user to enrich the network data associated with the created network area. Further, the set of user inputs received by the indexer [208] for the enrichment comprises a first input from the user for selection of at least one existing field from which the new network area is to be derived, and a second input from the user for selection of an operation to be executed on the selected at least one existing field. In an implementation, the user may select, via the UI [202], at least one existing field, such as a 'static network area field' or a 'dynamic network area field', from which a new network area is to be derived via the indexer [208]. Further, the user may provide one or more inputs for selection of an operation to be executed on the selected at least one existing field. In an implementation, the user may perform one or more operations such as, but not limited to, concatenating, splitting, or otherwise transforming the data. The operation helps transform or manipulate the data within the existing field.
[0084] In an implementation, the enrichment of the network data by the indexer [208] based on the set of user inputs comprises generating, by the indexer [208], a value corresponding to the executed operation on the selected at least one existing field. Further, the enrichment of the network data based on the set of user inputs comprises mapping, by the indexer [208], the generated value to a pre-defined value provided within a data set. Thereafter, the enrichment of the network data based on the set of user inputs comprises assigning, by the indexer [208], the mapped value to the created new network area. Further, the indexer [208] may generate the value corresponding to the executed operation (e.g., splitting, concatenating) on the selected at least one existing field (e.g., SNA/CNA/HNA). Further, the indexer [208] maps the generated value to a pre-defined value provided within a data set and assigns the mapped value to the created new network area. In an exemplary implementation, based on the operation applied to the selected existing field, the indexer [208] generates a value and then maps this generated value to a corresponding pre-defined value within a data set. In an implementation, the user or network administrator may define a pre-defined data set and values in a spreadsheet format. Further, the user or network administrator may define predefined mappings or rules that specify how certain or exemplary values should be translated or interpreted in the spreadsheet. Thereafter, the indexer [208] assigns the mapped value to the created new network area. In an implementation, the mapped value obtained from the spreadsheet is assigned to the newly created network area. This value represents the desired outcome or characteristic of the network area based on the selected node, category, existing field, and applied operation.
[0085] In an implementation, the enrichment of the network data is performed at a predefined scheduled interval of time via the indexer [208]. The user or network administrator may define an interval time and at least one of a number of network nodes, category types, geographic location, boundary region, and the like to perform the enrichment of the network data.
[0086] Next, at step [318], the method [300] as disclosed by the present disclosure comprises uploading, by the indexer [208], the enriched network data at the DDL [100u] for storage. The method [300] comprises uploading, by the indexer [208], the enriched network data at the DDL [100u] for storage after processing. In an implementation, the indexer [208] may store the data associated with the created new network area into the DDL [100u].
[0087] Thereafter, the method [300] terminates at step [320].
[0088] Further referring to Fig. 4, it illustrates an exemplary process [400] for creating a network area, i.e., a static network area and a dynamic network area, in accordance with the exemplary embodiments of the present invention. In an implementation, the process as depicted in Fig. 4 is executed by the system [200] in conjunction with the system [100] to create a network area, i.e., a dynamic network area and a static network area, from the existing information/fields, or from a sub-part of an existing field, available at disposal in network systems, wherein the dynamic network area and the static network area are created at least to fulfil a user requirement of a new field or value in the data sets and results.
[0089] For example, at step S1, the user [402] sends a request to the UI server (such as the UI [202]) for network area creation. In an exemplary aspect, the request may comprise a number of nodes, a type of category, and a geographic location for the network area creation.
[0090] Next, at step S2, the request for network area creation is forwarded to the load balancer [100k].
[0091] Next, at step S3, the load balancer [100k] checks for an available IPM [100a] instance for sending the request for network area creation. The load balancer [100k] hits the available IPM [100a] instance to send the request for network area creation.
[0092] Next, at step S4, the IPM [100a] saves the data associated with the request for network area creation into the distributed data lake [100u].
[0093] Further, at step S5, the IPM [100a] forwards the data associated with the request for network area creation to the indexer (IN) [208], and subsequently, at step S6, the indexer [208] analyses the received data and stores the analysed network area data into the database [100u]. The indexer [208] is configured to analyse the data associated with the request to create the network area (SNA/CNA/HNA) with a set of parameters such as one or more network nodes (e.g., servers) and a category type (e.g., customer service type).
[0094] Further, in an implementation, at step S6, the indexer [208] first performs enrichment of the network data associated with the created network area based on a set of user inputs received from the user [402], such as existing fields (e.g., SNA/CNA/HNA) and operations (e.g., splitting, concatenating) from which the new network area is to be derived. The indexer [208] may generate a new field or value, map the generated field or value to a predefined data set value, and assign the mapped value to the created new network area.
[0095] Secondly, the indexer [208] uploads the enriched network data, or the data associated with the created new network area, at the DDL [100u] for storage.
[0096] Thereafter, at step S7, the indexer [208] may perform scheduled enrichment of the network data associated with the created network area based on a set of user inputs received from a user [402]. The user [402] (such as a network administrator) may define an interval time and at least one of a number of network nodes, category types, geographic location, boundary region, and the like to perform the enrichment of the network data associated with the created network area.
[0097] Finally, at steps S8 and S9, the IPM [100a] sends, via the UI [202], a notification to the user [402] of the successful creation of the requested network area, along with the stored enrichment data and the created new network area information.
[0098] Referring to Fig. 5, which illustrates an exemplary block diagram of a computing device [500] (also referred to herein as a computer system [500]) upon which an embodiment of the present disclosure may be implemented. In an implementation, the computing device [500] implements the method for creating a network area, i.e., a dynamic and static network area, using the system [200]. In another implementation, the computing device [500] itself implements the method for creating a network area, i.e., a dynamic and static network area, using one or more units configured within the computing device [500], wherein said one or more units are capable of implementing the features as disclosed in the present disclosure.
[0099] The computing device [500] may include a bus [502] or other communication mechanism for communicating information, and a processor [504] coupled with the bus [502] for processing information. The processor [504] may be, for example, a general purpose microprocessor. The computing device [500] may also include a main memory [506], such as a random access memory (RAM) or other dynamic storage device, coupled to the bus [502] for storing information and instructions to be executed by the processor [504]. The main memory [506] also may be used for storing temporary variables or other intermediate information during execution of the instructions to be executed by the processor [504]. Such instructions, when stored in non-transitory storage media accessible to the processor [504], render the computing device [500] into a special-purpose machine that is customized to perform the operations specified in the instructions. The computing device [500] further includes a read only memory (ROM) [508] or other static storage device coupled to the bus [502] for storing static information and instructions for the processor [504].
[0100] A storage device [510], such as a magnetic disk, optical disk, or solid-state drive, is provided and coupled to the bus [502] for storing information and instructions. The computing device [500] may be coupled via the bus [502] to a display [512], such as a cathode ray tube (CRT), for displaying information to a computer user. An input device [514], including alphanumeric and other keys, may be coupled to the bus [502] for communicating information and command selections to the processor [504]. Another type of user input device may be a cursor controller [516], such as a mouse, a trackball, or cursor direction keys, for communicating direction information and command selections to the processor [504], and for controlling cursor movement on the display [512]. This input device typically has two degrees of freedom in two axes, a first axis (e.g., x) and a second axis (e.g., y), that allow the device to specify positions in a plane.
[0101] The computing device [500] may implement the techniques described herein using customized hard-wired logic, one or more ASICs or FPGAs, firmware, and/or program logic which, in combination with the computing device [500], causes or programs the computing device [500] to be a special-purpose machine. According to one embodiment, the techniques herein are performed by the computing device [500] in response to the processor [504] executing one or more sequences of one or more instructions contained in the main memory [506]. Such instructions may be read into the main memory [506] from another storage medium, such as the storage device [510]. Execution of the sequences of instructions contained in the main memory [506] causes the processor [504] to perform the process steps described herein. In alternative embodiments, hard-wired circuitry may be used in place of or in combination with software instructions.
[0102] The computing device [500] may also include a communication interface [518] coupled to the bus [502]. The communication interface [518] provides a two-way data communication coupling to a network link [520] that is connected to a local network [522]. For example, the communication interface [518] may be an integrated services digital network (ISDN) card, cable modem, satellite modem, or a modem to provide a data communication connection to a corresponding type of telephone line. As another example, the communication interface [518] may be a local area network (LAN) card to provide a data communication connection to a compatible LAN. Wireless links may also be implemented. In any such implementation, the communication interface [518] sends and receives electrical, electromagnetic, or optical signals that carry digital data streams representing various types of information.
[0103] The computing device [500] can send messages and receive data, including program code, through the network(s), the network link [520] and the communication interface [518]. In the Internet example, a server [530] might transmit a requested code for an application program through the Internet [528], the ISP [526], the local network [522], the host [524] and the communication interface [518]. The received code may be executed by the processor [504] as it is received, and/or stored in the storage device [510] or other non-volatile storage for later execution.
[0104] Further, a telecommunications organization may implement the method and system encompassed by this disclosure in its network performance management system, which involves configuring one or more APIs to collect data from various network equipment vendors, fetching real-time performance data, standardizing it, and storing it in a distributed data lake [100u]. The method and system for creating a network area, i.e., a dynamic and a static network area, within a network performance management system enables the company to efficiently monitor and optimize network performance across diverse equipment, reducing downtime and ensuring a seamless experience for its customers.
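By way of illustration, the multi-vendor collection and standardization step described in paragraph [0104] can be sketched as follows. This is a minimal sketch under assumptions: the vendor names, field mappings, and the `DataLake` class are invented for illustration and do not appear in the disclosure.

```python
# Hypothetical sketch of the vendor-agnostic collection flow: fetch raw
# counters from per-vendor feeds, standardize field names onto a common
# schema, and append the normalized records to a data-lake store.
# All class, vendor, and field names are illustrative assumptions.

def standardize(vendor: str, record: dict) -> dict:
    """Map vendor-specific field names onto a common schema (assumed mapping)."""
    field_maps = {
        "vendor_a": {"cellId": "node_id", "thrpt": "throughput_mbps"},
        "vendor_b": {"NODE": "node_id", "dl_throughput": "throughput_mbps"},
    }
    mapping = field_maps.get(vendor, {})
    return {mapping.get(k, k): v for k, v in record.items()}

class DataLake:
    """Minimal in-memory stand-in for the distributed data lake [100u]."""
    def __init__(self):
        self.records = []
    def store(self, record: dict) -> None:
        self.records.append(record)

lake = DataLake()
raw_feeds = [
    ("vendor_a", {"cellId": "C101", "thrpt": 412.5}),
    ("vendor_b", {"NODE": "C102", "dl_throughput": 398.0}),
]
for vendor, record in raw_feeds:
    lake.store(standardize(vendor, record))

print(lake.records[0]["node_id"])  # both feeds now share one schema
```

After standardization, records from different equipment vendors share a single schema, which is what allows network areas and KPIs to be computed uniformly across diverse equipment.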
[0105] Yet another aspect of the present disclosure relates to a non-transitory computer-readable storage medium storing instructions for creating a network area, the storage medium comprising executable code which, when executed by one or more units of a system, causes: a User Interface (UI) [202] to receive a request for creating the network area; a load balancer [100k] to transmit the request to an integrated performance management (IPM) [100a]; the IPM [100a] to: store a data associated with the request at a Distributed Data Lake (DDL) [100u]; and transmit the data associated with the request to an Indexer (IN) [208]; and the Indexer [208] to: analyse the data associated with the request to create the network area, create the network area based on the analysis of the data associated with the request, enrich a network data associated with the created network area based on a set of user input received from a user, and upload the enriched network data at the DDL [100u] for storage.
[0106] Yet another aspect of the present disclosure relates to a User Equipment (UE) for creating a network area, comprising a processor configured to: send, via a User Interface (UI) [202], a request for creating the network area; transmit, via a load balancer [100k], the request to an integrated performance management (IPM) [100a]; store, via the IPM [100a], a data associated with the request at a Distributed Data Lake (DDL) [100u]; transmit, via the IPM [100a], the data associated with the request to an Indexer (IN) [208]; analyse, via the Indexer [208], the data associated with the request to create the network area; create, via the Indexer [208], the network area based on the analysis of the data associated with the request; enrich, via the Indexer [208], a network data associated with the created network area based on a set of user input received from a user; and upload, via the Indexer [208], the enriched network data at the DDL [100u] for storage.
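The request flow recited in paragraphs [0105] and [0106] (UI [202] to load balancer [100k] to IPM [100a] to DDL [100u] and Indexer [208]) can be sketched end to end as below. This is an illustrative sketch only: the classes, method names, and round-robin routing policy are assumptions, not part of the disclosure.

```python
# Illustrative sketch of the create-network-area pipeline:
# load balancer routes the UI request to an IPM, the IPM persists the raw
# request in the DDL and forwards it to the Indexer, and the Indexer
# creates and enriches the network area, then uploads it to the DDL.

class DDL:
    """In-memory stand-in for the Distributed Data Lake [100u]."""
    def __init__(self):
        self.store = {}
    def put(self, key, value):
        self.store[key] = value

class Indexer:
    def __init__(self, ddl):
        self.ddl = ddl
    def handle(self, request):
        # analyse the request data and create the network area
        area = {"name": request["area_name"], "nodes": request["nodes"]}
        # enrich with user-supplied inputs, then upload to the DDL
        area["enriched"] = request.get("user_inputs", {})
        self.ddl.put(area["name"], area)
        return area

class IPM:
    def __init__(self, ddl, indexer):
        self.ddl, self.indexer = ddl, indexer
    def process(self, request):
        self.ddl.put("request:" + request["area_name"], request)  # persist raw request
        return self.indexer.handle(request)                       # forward to Indexer

class LoadBalancer:
    def __init__(self, ipms):
        self.ipms, self.i = ipms, 0
    def route(self, request):
        ipm = self.ipms[self.i % len(self.ipms)]  # simple round-robin (assumed policy)
        self.i += 1
        return ipm.process(request)

ddl = DDL()
lb = LoadBalancer([IPM(ddl, Indexer(ddl))])
area = lb.route({"area_name": "CNA-North", "nodes": ["gNB-1", "gNB-2"],
                 "user_inputs": {"region": "north"}})
print(area["name"])  # -> CNA-North
```

The sketch shows why both the raw request and the enriched result end up in the DDL: the request is persisted on arrival for auditability, while the enriched network area is uploaded only after the Indexer's analysis completes.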
[0107] As is evident from the above, the present disclosure provides a technically advanced solution for creating a dynamic network area and a static network area from existing information/fields, or from a sub-part of an existing field, available in a network system, wherein the dynamic network area and the static network area are created at least to fulfil a user requirement of a new field in the documents and results. The enrichment facility described in the present disclosure for new fields is completely autonomous, scheduled, follows user-defined rules and takes effect as soon as CNAs, HNAs and SNAs are created. The values for the newly created network area are decided based on the values of the old existing field. Furthermore, the present disclosure provides a mapping between the two, entered either manually one by one or uploaded using a spreadsheet. Furthermore, the present disclosure facilitates the user to modify their network logic in real time whilst observing the corresponding changes. Moreover, based on the implementation of the features of the present disclosure, network areas, i.e., the dynamic network area and the static network area, are created at different granularities: for one network node only, for multiple nodes in the network, for one category in a network node, or for selected categories in a network node. Hence, this helps in drilling down the information at various levels in the network for enhanced analysis. The solution also helps operations to roll up and drill down the monitoring of KPIs and counters for their troubleshooting.
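The enrichment rule described above (and recited in claims 5 and 6) — execute an operation on an existing field, map the result against a user-supplied data set, and assign the mapped value to the new network area — can be sketched as below. The operation, field names, and mapping values are invented examples; the disclosure does not specify them.

```python
# Minimal sketch of the claim-5/6 enrichment rule: derive a new network-area
# value for each record from an existing field via a user-selected operation
# plus a user-uploaded value map. All concrete values here are assumptions.

def enrich(records, existing_field, operation, value_map, new_field):
    """Derive `new_field` for each record from `existing_field`."""
    for record in records:
        derived = operation(record[existing_field])   # e.g. prefix extraction
        record[new_field] = value_map.get(derived, "UNMAPPED")
    return records

records = [{"site_id": "MUM-0001"}, {"site_id": "DEL-0042"}, {"site_id": "BLR-0007"}]
prefix = lambda v: v.split("-")[0]             # user-selected operation (claim 5)
region_map = {"MUM": "West", "DEL": "North"}   # mapping entered manually or via spreadsheet

enrich(records, "site_id", prefix, region_map, "region_area")
print([r["region_area"] for r in records])  # -> ['West', 'North', 'UNMAPPED']
```

Values absent from the uploaded mapping fall through to a sentinel here; in a scheduled enrichment run, such unmapped records could be surfaced to the user for completion of the mapping.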
[0108] Further, in accordance with the present disclosure, it is to be acknowledged that the functionality described for the various components/units can be implemented interchangeably. While specific embodiments may disclose a particular functionality of these units for clarity, it is recognized that various configurations and combinations thereof are within the scope of the disclosure. The functionality of specific units, as disclosed in the disclosure, should not be construed as limiting the scope of the present disclosure. Consequently, alternative arrangements and substitutions of units, provided they achieve the intended functionality described herein, are considered to be encompassed within the scope of the present disclosure.
[0109] While considerable emphasis has been placed herein on the disclosed embodiments, it will be appreciated that many embodiments can be made and that many changes can be made to the embodiments without departing from the principles of the present disclosure. These and other changes in the embodiments of the present disclosure will be apparent to those skilled in the art, whereby it is to be understood that the foregoing descriptive matter to be implemented is illustrative and non-limiting.
Claims
1. A method for creating a network area, comprising: receiving, at a User Interface (UI) [202], a request for creating the network area; transmitting, by a load balancer [100k], the request to an integrated performance management (IPM) [100a]; storing, by the IPM [100a], a data associated with the request at a Distributed Data Lake (DDL) [100u]; transmitting, by the IPM [100a], the data associated with the request to an Indexer (IN) [208]; analysing, by the Indexer [208], the data associated with the request to create the network area; creating, by the Indexer [208], the network area based on the analysis of the data associated with the request; enriching, by the Indexer [208], a network data associated with the created network area based on a set of user input received from a user; and uploading, by the Indexer [208], the enriched network data at the DDL [100u] for storage.
2. The method as claimed in claim 1, wherein the enrichment of the network data is performed in a predefined scheduled interval of time.
3. The method as claimed in claim 1, wherein at least the network area comprises at least one of a static network area or a dynamic network area.
4. The method as claimed in claim 1, wherein the request to create the network area comprises at least selection of one or more nodes, and one or more categories for which the network area is to be created.
5. The method as claimed in claim 1, wherein the set of user input for the enrichment comprises: a first input from the user for selection of at least one existing field from which a new network area is to be derived, and a second input from the user for selection of an operation to be executed on the selected at least one existing field.
6. The method as claimed in claim 5, wherein the enrichment of the network data based on the set of user input comprises: generating, by the Indexer [208], a value corresponding to the executed operation on the selected at least one existing field, mapping, by the Indexer [208], the generated value to a pre-defined value provided within a data set, and assigning, by the Indexer [208], the mapped value to the created new network area.
7. A system for creating a network area, comprising: a User Interface (UI) [202], configured to receive a request for creating the network area; a load balancer [100k], configured to transmit the request to an integrated performance management (IPM) [100a]; the IPM [100a], configured to: store a data associated with the request at a Distributed Data Lake (DDL) [100u]; transmit the data associated with the request to an Indexer (IN) [208]; and an Indexer [208], configured to: analyse the data associated with the request to create the network area, create the network area based on the analysis of the data associated with the request, enrich a network data associated with the created network based on a set of user input received from a user, and upload the enriched network data at the DDL [100u] for storage.
8. The system as claimed in claim 7, wherein the enrichment of the network data is performed in a predefined scheduled interval of time.
9. The system as claimed in claim 7, wherein at least the network area comprises at least one of a static network area or a dynamic network area.
10. The system as claimed in claim 7, wherein the request to create the network area comprises at least selection of one or more nodes, and one or more categories for which the network area is to be created.
11. The system as claimed in claim 7, wherein the set of user input for the enrichment comprises:
a first input from the user for selection of at least one existing field from which a new network area is to be derived, and a second input from the user for selection of an operation to be executed on the selected at least one existing field.
12. The system as claimed in claim 11, wherein to enrich the network data based on the set of user input, the Indexer [208] is configured to: generate a value corresponding to the executed operation on the selected at least one existing field, map the generated value to a pre-defined value provided within a data set, and assign the mapped value to the created new network area.
13. A User Equipment (UE) for creating a network area, comprising a processor configured to: send, via a User Interface (UI) [202], a request for creating the network area; transmit, via a load balancer [100k], the request to an integrated performance management (IPM) [100a]; store, via the IPM [100a], a data associated with the request at a Distributed Data Lake (DDL) [100u]; transmit, via the IPM [100a], the data associated with the request to an Indexer (IN) [208]; analyse, via the Indexer [208], the data associated with the request to create the network area; create, via the Indexer [208], the network area based on the analysis of the data associated with the request; enrich, via the Indexer [208], a network data associated with the created network based on a set of user input received from a user; and upload, via the Indexer [208], the enriched network data at the DDL [100u] for storage.
14. A non-transitory computer-readable storage medium storing instructions for creating a network area, the storage medium comprising executable code which, when executed by one or more units of a system, causes: a User Interface (UI) [202] to receive a request for creating the network area; a load balancer [100k] to transmit the request to an integrated performance management (IPM) [100a]; the IPM [100a] to: store a data associated with the request at a Distributed Data Lake (DDL) [100u]; transmit the data associated with the request to an Indexer (IN) [208]; and an Indexer [208] to:
analyse the data associated with the request to create the network area, create the network area based on the analysis of the data associated with the request, enrich a network data associated with the created network based on a set of user input received from a user, and upload the enriched network data at the DDL [100u] for storage.
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| IN202321048371 | 2023-07-19 | | |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2025017726A1 true WO2025017726A1 (en) | 2025-01-23 |
Family
ID=94281324
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/IN2024/051302 Pending WO2025017726A1 (en) | 2023-07-19 | 2024-07-18 | Method and system for creating a network area |
Country Status (1)
| Country | Link |
|---|---|
| WO (1) | WO2025017726A1 (en) |
Citations (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| EP3494718A1 (en) * | 2017-01-05 | 2019-06-12 | Huawei Technologies Co., Ltd. | Systems and methods for application-friendly protocol data unit (pdu) session management |
| WO2021034906A1 (en) * | 2019-08-19 | 2021-02-25 | Q Networks, Llc | Methods, systems, kits and apparatuses for providing end-to-end, secured and dedicated fifth generation telecommunication |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| US10666525B2 (en) | Distributed multi-data source performance management | |
| US10747592B2 (en) | Router management by an event stream processing cluster manager | |
| US10931540B2 (en) | Continuous data sensing of functional states of networked computing devices to determine efficiency metrics for servicing electronic messages asynchronously | |
| Picoreti et al. | Multilevel observability in cloud orchestration | |
| US10423469B2 (en) | Router management by an event stream processing cluster manager | |
| US10116534B2 (en) | Systems and methods for WebSphere MQ performance metrics analysis | |
| US11487588B2 (en) | Auto-sizing for stream processing applications | |
| CN114756301B (en) | Log processing method, device and system | |
| CN113312242B (en) | Interface information management method, device, equipment and storage medium | |
| CN115514618A (en) | Alarm event processing method and device, electronic equipment and medium | |
| CN120780488A (en) | Collaborative method and system for enhancing interoperability of AI (advanced technology attachment) agents | |
| WO2025017579A1 (en) | Method and system for unified data ingestion in a network performance management system | |
| WO2025017649A1 (en) | Method and system for monitoring performance of network elements | |
| WO2025046609A1 (en) | METHOD AND SYSTEM FOR ANALYSIS OF KEY PERFORMANCE INDICATORS (KPIs) | |
| US11403313B2 (en) | Dynamic visualization of application and infrastructure components with layers | |
| CN119961231A (en) | A dynamic log collection method and system | |
| WO2025017726A1 (en) | Method and system for creating a network area | |
| WO2025041165A1 (en) | Method and system to automatically assign restricted data to a user | |
| WO2025017578A1 (en) | Method and system of providing a unified data normalizer within a network performance management system | |
| WO2025017640A1 (en) | Method and system for real-time analysis of key performance indicators (kpis) deviations | |
| WO2025017646A1 (en) | Method and system for optimal allocation of resources for executing kpi requests | |
| WO2025017729A1 (en) | Method and system for an automatic root cause analysis of an anomaly in a network | |
| WO2025027653A1 (en) | Method and system for automatically detecting a new network node associated with a network | |
| WO2025017645A1 (en) | Method and system for performing real-time analysis of kpis to monitor performance of network | |
| WO2025022439A1 (en) | Method and system for generation of interconneted dashboards |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 24842717; Country of ref document: EP; Kind code of ref document: A1 |