CN119201399A - Task data configuration method, device, electronic device and storage medium - Google Patents
- Publication number
- CN119201399A (application number CN202411362702.4A)
- Authority
- CN
- China
- Prior art keywords
- data
- task
- configuration
- node
- processed
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/48—Program initiating; Program switching, e.g. by interrupt
- G06F9/4806—Task transfer initiation or dispatching
- G06F9/4843—Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
- G06F9/4881—Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F11/00—Error detection; Error correction; Monitoring
- G06F11/30—Monitoring
- G06F11/3003—Monitoring arrangements specially adapted to the computing system or computing system component being monitored
- G06F11/3006—Monitoring arrangements specially adapted to the computing system or computing system component being monitored where the computing system is distributed, e.g. networked systems, clusters, multiprocessor systems
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F11/00—Error detection; Error correction; Monitoring
- G06F11/30—Monitoring
- G06F11/3051—Monitoring arrangements for monitoring the configuration of the computing system or of the computing system component, e.g. monitoring the presence of processing resources, peripherals, I/O links, software programs
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/50—Allocation of resources, e.g. of the central processing unit [CPU]
- G06F9/5005—Allocation of resources, e.g. of the central processing unit [CPU] to service a request
- G06F9/5027—Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
- G06F9/505—Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals considering the load
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/50—Allocation of resources, e.g. of the central processing unit [CPU]
- G06F9/5061—Partitioning or combining of resources
- G06F9/5072—Grid computing
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D10/00—Energy efficient computing, e.g. low power processors, power management or thermal management
Abstract
The invention relates to the field of data request processing and discloses a task data configuration method. The method comprises: determining N types of data to be configured according to key fields of a data task to be processed, and generating N configuration tasks according to the N types of data; calculating a load value of each service node according to operation index data, screening a plurality of service nodes whose load values are smaller than a preset threshold value, dividing these service nodes into N node groups, and associating each node group with a configuration task one by one; constructing, in a preset database, a storage file for storing configuration data for each configuration task, and creating a transmission channel between each storage file and the corresponding node group; downloading the configuration data from the corresponding node group into the corresponding storage file through the transmission channel, performing calculation processing on each piece of configuration data, and writing the calculation results into the data task to be processed. The invention decomposes tasks and distributes them to different node groups for processing, thereby improving the processing speed and stability of data configuration.
Description
Technical Field
The present invention relates to the field of data request processing, and in particular, to a task data configuration method, device, electronic device, and storage medium.
Background
In the field of data configuration processing, data acquisition and processing efficiency is critical to real-time performance and resource utilization.
Particularly in the insurance finance scenario, when a data table (data task) involving a plurality of insurance products needs data configuration for each insurance product, the conventional method generally relies on sequentially sending data configuration requests to each service node in the order of the insurance product list. This serial request mode is susceptible to factors such as network delay, so the data loading speed is slow.
In addition, in the existing technical scheme, the collected configuration data is often stored in a cache database, and calculation processing can only be performed after the whole of the configuration data has been downloaded. Because the capacity of the cache database is limited, there is a risk of overload, which affects the stability of data configuration processing.
Therefore, how to increase the processing speed and stability of data configuration is a technical problem to be solved.
Disclosure of Invention
In view of the foregoing, it is necessary to provide a task data configuration method that improves the processing speed and stability of data configuration by converting a data configuration request into configuration tasks that can be processed in parallel and by intelligently distributing the configuration tasks according to the current load of each service node.
The task data configuration method provided by the invention comprises the following steps:
Receiving a request for configuring data of a data task to be processed, identifying key fields of the data task to be processed, determining N types of data to be configured of the data task to be processed according to N key fields obtained by identification, and generating N configuration tasks of the data task to be processed according to the N types of data, wherein the key fields comprise configuration data names or configuration data IDs;
Acquiring operation index data of each service node in a preset server cluster, calculating a load value of each service node according to the operation index data, screening a plurality of service nodes with load values smaller than a preset threshold value, dividing the service nodes into N node groups, and associating each node group with each configuration task in a one-to-one correspondence manner;
Constructing a storage document for storing configuration data for each configuration task in a preset database, and creating a transmission channel between each storage document and a corresponding node group;
and downloading configuration data from the corresponding node group to the corresponding storage document by utilizing the transmission channel corresponding to each configuration task, performing calculation processing on the configuration data of each storage document, and writing the calculation result into the data task to be processed.
Optionally, the identifying the key field of the data task to be processed includes:
dividing the data task to be processed into a plurality of data segments according to the chapter title names of the data task to be processed, wherein each data segment comprises at least one chapter title name and a segment of text content under the at least one chapter title name;
And reading out the text containing the preset key fields from each data segment, and identifying the preset key fields from the read text.
Optionally, the obtaining operation index data of each service node in the preset server cluster includes:
acquiring a node monitoring log from a preset monitoring tool in the preset server cluster, wherein the node monitoring log records operation index data of each service node;
and extracting the operation index data of each service node from the node monitoring log according to the identifier of each service node.
Optionally, the operation index data includes a CPU usage rate, a memory usage rate, a disk I/O read/write speed, a network bandwidth usage rate, a number of tasks not performed, and an operation speed of the tasks performed, and the calculating, according to the operation index data, a load value of each service node includes:
The first operation index data and the second operation index data are selected at will from the operation index data and are respectively substituted into a preset load value formula to be calculated, so that the load value of each service node is obtained;
Wherein the load value formula is L_i = W1 × A_i + W2 × B_i, where W1 and W2 are given weight factors, A_i is the first operation index data of the i-th service node, and B_i is the second operation index data of the i-th service node.
Optionally, the calculating the load value of each service node according to the operation index data includes:
preprocessing the operation index data, wherein the preprocessing comprises removing invalid or error data;
converting the preprocessed operation index data into characteristic values, collecting characteristic values corresponding to all operation indexes to obtain corresponding characteristic value arrays, calculating the characteristic value arrays by using a trained load prediction model, distributing preset weights to each characteristic value in the characteristic value arrays, and carrying out weighted summation on the characteristic value arrays distributed with the weights to obtain the load value of each service node.
Optionally, the screening a plurality of service nodes with load values smaller than a preset threshold and dividing them into N node groups includes:
reading the load value of each service node, and screening a plurality of service nodes with load values smaller than a preset threshold value;
and according to the N configuration tasks, dividing the plurality of service nodes into N corresponding node groups on average.
Optionally, the creating a transmission channel between each stored document and the corresponding node group includes:
Based on a GET mode of a preset transmission protocol, configuring an independent GET request for each storage document;
and sending the GET request to a corresponding node group for communication connection to obtain a transmission channel.
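The two steps above can be illustrated with a minimal Python sketch (not part of the claimed method): one independent GET request is prepared per storage document, each addressed to that document's node group. The node-group URLs and the `X-Storage-Document` header are hypothetical placeholders.

```python
from urllib.request import Request

def build_transfer_channels(doc_to_group):
    """Create one independent GET request per storage document,
    addressed to that document's associated node group."""
    channels = {}
    for doc_name, group_url in doc_to_group.items():
        req = Request(group_url, method="GET")
        # Hypothetical header tagging which storage document the
        # downloaded configuration data belongs to.
        req.add_header("X-Storage-Document", doc_name)
        channels[doc_name] = req
    return channels

channels = build_transfer_channels({
    "doc_car_insurance": "http://node-group-1.example/config",
    "doc_accident_insurance": "http://node-group-2.example/config",
})
```

Sending each request (e.g. via `urllib.request.urlopen`) would then establish the transmission channel through which the configuration data is downloaded into the matching storage document.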
In order to solve the above problems, the present invention also provides a task data configuration device, including:
The device comprises a receiving module, a processing module and a processing module, wherein the receiving module is used for receiving a request for configuring data of a data task to be processed, identifying key fields of the data task to be processed, determining N types of data required to be configured for the data task to be processed according to N key fields obtained by identification, and generating N configuration tasks of the data task to be processed according to the N types of data, wherein the key fields comprise configuration data names or configuration data IDs;
the computing module is used for acquiring operation index data of each service node in a preset server cluster, computing the load value of each service node according to the operation index data, screening a plurality of service nodes with the load value smaller than a preset threshold value, dividing the service nodes into N node groups, and associating each node group with each configuration task in a one-to-one correspondence manner;
The construction module is used for constructing a storage document for storing configuration data for each configuration task in a preset database, and creating a transmission channel between each storage document and a corresponding node group;
And the writing module is used for downloading the configuration data from the corresponding node group to the corresponding storage document by utilizing the transmission channel corresponding to each configuration task, performing calculation processing on the configuration data of each storage document, and writing the calculation result into the data task to be processed.
In order to solve the above-mentioned problems, the present invention also provides an electronic apparatus including:
At least one processor, and
A memory communicatively coupled to the at least one processor, wherein,
The memory stores a task data configuration program executable by the at least one processor, and the task data configuration program, when executed by the at least one processor, enables the at least one processor to perform the task data configuration method described above.
In order to solve the above-described problems, the present invention also provides a computer-readable storage medium having stored thereon a task data configuration program executable by one or more processors to implement the task data configuration method described above.
Compared with the prior art, the method divides the data task to be processed into a plurality of configuration tasks according to the identified key fields of the data task. By decomposing the task and processing the parts in parallel, the network delay problems caused by the traditional serial request mode are avoided, and the data loading and processing speed is greatly improved.
By monitoring the operation index data of each service node in real time, screening a plurality of service nodes with load values smaller than a preset threshold value, dividing them into N node groups, and associating each node group with a configuration task in a one-to-one correspondence, the load balancing strategy prevents some nodes from being overloaded and ensures stable operation of the download tasks.
An independent storage document is created for each configuration task, a transmission channel is created between each storage document and the corresponding node group, configuration data is downloaded from the corresponding node group to the corresponding storage document through the transmission channel, calculation processing is performed on the configuration data in each storage document, and the calculation results are written into the data task to be processed. By downloading and processing at the same time, dependence on a cache database is reduced and the overload risk caused by its limited capacity is avoided; decomposing the task and distributing it to different node groups for processing improves the processing speed and stability of data configuration.
Drawings
FIG. 1 is a flow chart of a task data configuration method according to an embodiment of the present invention;
FIG. 2 is a schematic block diagram of a task data configuration device according to an embodiment of the present invention;
Fig. 3 is a schematic structural diagram of an electronic device for implementing a task data configuration method according to an embodiment of the present invention;
The achievement of the objects, functional features and advantages of the present invention will be further described with reference to the accompanying drawings, in conjunction with the embodiments.
Detailed Description
The present invention will be described in further detail with reference to the drawings and examples, in order to make the objects, technical solutions and advantages of the present invention more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
It should be noted that descriptions of "first", "second", etc. in this disclosure are for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features. Thus, a feature defined by "first" or "second" may explicitly or implicitly include at least one such feature. In addition, the technical solutions of the embodiments may be combined with each other, but only on the premise that the combination can be realized by those skilled in the art; when technical solutions are contradictory or cannot be realized, the combination should be considered absent and outside the scope of protection claimed in the present invention.
Referring to fig. 1, a flow chart of a task data configuration method according to an embodiment of the invention is shown. The method is performed by an electronic device.
In this embodiment, the task data configuration method includes:
S1, receiving a request for configuring data of a data task to be processed, identifying key fields of the data task to be processed, determining N types of data to be configured of the data task to be processed according to N key fields obtained through identification, and generating N configuration tasks of the data task to be processed according to the N types of data, wherein the key fields comprise configuration data names or configuration data IDs.
In this embodiment, a request from a user or an application program to configure data for a data task to be processed is received. After the request is received, the data task to be processed needs to be analyzed, and its key fields are identified.
The data task to be processed refers to one task or a group of tasks, or a data table, that requires data configuration or updating. For example, in the insurance finance scenario, a salesman uploads a data task to be processed, which is an insurance policy to be associated. The policy to be associated only lists insurance products of different types, while the various association tables storing detailed information about the insurance products (such as premium calculation rules, claim settlement conditions and product terms) are distributed among the service nodes in a preset server cluster. Therefore, the configuration data of the policy to be associated needs to be obtained from the association tables of the service nodes, and the various types of insurance products in the policy are then processed with this data, for example to supplement detailed information, check information, or calculate the prices of the insurance products.
The key field of the data task to be processed is used to distinguish between different data items or data types in the data task to be processed, e.g. the key field may be an ID or name of the insurance product.
How many different data items or data types are contained in the data task to be processed is determined from the identified key fields. For example, if the identified key fields include the names of car insurance, personal accident insurance and property insurance, then the data task to be processed needs the data items or data types car insurance, personal accident insurance and property insurance to be configured.
And determining N types of data to be configured for the task of the data to be processed according to the identified key field, and generating an independent configuration task for each type of data. This means that if there are N different key fields, N configuration tasks will result.
By contrast, the traditional method generally relies on sequentially sending data configuration requests to each service node in the order of the insurance product list, and this serial request mode is susceptible to factors such as network delay, so the data loading speed is slow. The present invention decomposes one large data configuration task into a plurality of small tasks according to data type, so that the tasks can be processed in parallel on a plurality of service nodes, improving the data processing speed.
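The decomposition step described above can be sketched as follows; this is an illustrative Python fragment under assumed names, not the patented implementation. One configuration task is generated per distinct key field, so N key fields yield N tasks.

```python
def generate_configuration_tasks(key_fields):
    """Derive one configuration task per distinct key field, so the
    tasks can later be dispatched to node groups in parallel."""
    distinct = []
    for field in key_fields:
        if field not in distinct:  # preserve order, drop duplicates
            distinct.append(field)
    return [{"task_id": i, "data_type": f} for i, f in enumerate(distinct, 1)]

tasks = generate_configuration_tasks([
    "car insurance", "personal accident insurance",
    "property insurance", "car insurance",  # duplicate is collapsed
])
# 3 distinct data types -> N = 3 configuration tasks
```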
In one embodiment, the identifying key fields of the data task to be processed includes:
dividing the data task to be processed into a plurality of data segments according to the chapter title names of the data task to be processed, wherein each data segment comprises at least one chapter title name and a segment of text content under the at least one chapter title name;
And reading out the text containing the preset key fields from each data segment, and identifying the preset key fields from the read text.
Dividing the content of the data task to be processed into a plurality of data segments according to the chapter title names in the data task to be processed, screening out texts containing preset key fields (such as configuration data names or configuration data IDs) from each data segment, and identifying the preset key fields from the read texts.
Assuming that the preset key fields include "insurance product ID" and "product name", the text containing these fields needs to be screened out from the parsed paragraphs. For example, the key fields in data segment 1 are insurance product ID 001 and product name car insurance; the key fields in data segment 2 are insurance product ID 002 and product name personal accident insurance. Both data segment 1 and data segment 2 contain both fields and are therefore retained.
By analyzing the chapter title names in the data task and splitting the content into a plurality of texts, which parts need to be configured can be more accurately identified, specific fields needing to be updated can be more accurately positioned, and unnecessary data processing work is avoided.
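As a rough illustration of segment splitting and key-field extraction, the Python sketch below splits a task text at chapter titles and scans each segment with regular expressions. The chapter-title convention and field patterns are assumptions chosen to mirror the example above.

```python
import re

def split_into_segments(document):
    """Split a data task's text into segments, one per chapter title.
    Assumes chapter titles start a line with 'Chapter <number>'."""
    parts = re.split(r"(?m)^(?=Chapter \d+)", document)
    return [p for p in parts if p.strip()]

def extract_key_fields(segment, patterns):
    """Identify the preset key fields inside one data segment."""
    found = {}
    for name, pattern in patterns.items():
        match = re.search(pattern, segment)
        if match:
            found[name] = match.group(1)
    return found

doc = ("Chapter 1 Motor cover\ninsurance product ID: 001\n"
       "product name: car insurance\n"
       "Chapter 2 Personal cover\ninsurance product ID: 002\n"
       "product name: personal accident insurance\n")
segments = split_into_segments(doc)
fields = [extract_key_fields(s, {"id": r"insurance product ID: (\d+)",
                                 "name": r"product name: ([^\n]+)"})
          for s in segments]
```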
S2, acquiring operation index data of each service node in a preset server cluster, calculating a load value of each service node according to the operation index data, screening a plurality of service nodes with load values smaller than a preset threshold value, dividing the service nodes into N node groups, and associating each node group with each configuration task in a one-to-one correspondence manner.
In this embodiment, a node monitoring log is obtained from a preset monitoring tool in the preset server cluster; the preset monitoring tool may be the Prometheus monitoring and alarm system. The node monitoring log records the operation index data of each service node at the current moment. By obtaining the operation index data of each service node in real time through the monitoring tool (such as Prometheus), the current state of each service node can be known accurately, and configuration tasks can be intelligently allocated according to the actual load of each service node, avoiding the situation in which some nodes are overloaded while others are idle.
According to the identifier of each service node, the operation index data of each service node is extracted from the node monitoring log, and a plurality of service nodes with load values smaller than a preset threshold (for example, 0.75) are screened out and divided into N node groups. In this way, the resources in the cluster can be used more effectively: the service nodes in each node group can efficiently process the tasks allocated to them, better load balancing is achieved, overload of a single node is avoided, the available resources in the cluster are fully utilized, and the resource utilization of the whole system is improved.
Through the one-to-one correspondence between configuration tasks and node groups, each configuration task is guaranteed to be processed by a dedicated node group. Each node group only needs to attend to the configuration task it is responsible for, interference among tasks is avoided, and the accuracy and efficiency of task processing are improved.
In one embodiment, the screening the plurality of service nodes with load values smaller than the preset threshold value and dividing the plurality of service nodes into N node groups includes:
reading the load value of each service node, and screening a plurality of service nodes with load values smaller than a preset threshold value;
and according to the N configuration tasks, dividing the plurality of service nodes into N corresponding node groups on average.
All the service nodes with load values smaller than the preset threshold are divided equally into N node groups. Average distribution requires no complex calculation or additional resource allocation logic, and it can ensure to a certain extent that the load levels of the service nodes remain roughly the same, avoiding the problem of single-point overload.
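A minimal sketch of this filtering-and-average-division step, assuming round-robin dealing as one way to keep group sizes even (the patent does not prescribe a specific partitioning algorithm):

```python
def partition_into_groups(nodes, loads, threshold, n_groups):
    """Keep only nodes whose load value is below the threshold, then
    deal them out round-robin so group sizes stay as even as possible."""
    eligible = [n for n in nodes if loads[n] < threshold]
    groups = [[] for _ in range(n_groups)]
    for i, node in enumerate(eligible):
        groups[i % n_groups].append(node)
    return groups

loads = {"n1": 0.20, "n2": 0.90, "n3": 0.45,
         "n4": 0.60, "n5": 0.30, "n6": 0.80}
groups = partition_into_groups(sorted(loads), loads,
                               threshold=0.75, n_groups=2)
# n2 and n6 exceed the threshold and are filtered out;
# the remaining four nodes are split 2/2 across the two groups
```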
In other implementations, the plurality of service nodes with filtering load values smaller than the preset threshold may be further divided into N node groups by:
In the first mode, a priority is assigned to each of the service nodes whose load values are smaller than the preset threshold: for example, service nodes with load values of 0.10-0.29 are assigned high priority, service nodes with load values of 0.30-0.49 are assigned medium priority, and service nodes with load values of 0.50-0.69 are assigned low priority. The high-priority service nodes are divided into the N node groups first; if there are not enough high-priority service nodes, the medium-priority service nodes are called upon. High-priority nodes are usually the nodes with better performance and can complete tasks more quickly, improving overall system performance; in addition, the priorities can be adjusted dynamically as loads change, so the system adapts more flexibly to different workloads.
In the second mode, M service nodes with load values smaller than the preset threshold are screened out and divided into N node groups, the load of the M service nodes is monitored in real time, and the task allocation strategy is adjusted dynamically according to the actual situation. For example, if the load of a certain node suddenly increases, its task volume can be reduced immediately and tasks can be transferred to other nodes with lower loads. By continuously monitoring node states, task allocation can be adjusted in real time so that the system load is always kept in an optimal state; overloaded or faulty nodes can be discovered and handled quickly, and tasks are transferred in time, improving the fault tolerance and availability of the system.
The present invention does not limit which mode is selected; the choice depends on the specific requirements of the actual application scenario, such as the system's demands on performance, load fluctuation and stability.
In one embodiment, the obtaining operation index data of each service node in the preset server cluster includes:
acquiring a node monitoring log from a preset monitoring tool in the preset server cluster, wherein the node monitoring log records operation index data of each service node;
and extracting the operation index data of each service node from the node monitoring log according to the identifier of each service node.
The whole server cluster is monitored by a preset monitoring tool (such as the Prometheus monitoring and alarm system), and the operation index data of each service node is captured by the preset monitoring tool. The operation index data includes, but is not limited to, CPU usage rate, memory usage rate, disk I/O read/write speed, network bandwidth usage rate, the number of tasks not yet executed, and the operation speed of executed tasks.
The preset monitoring tool periodically collects the operation index data from each service node, stores the operation index data in a time sequence database of the preset monitoring tool, and forms a node monitoring log through the operation index data, so that the operation index data of each service node is extracted from the node monitoring log according to the identifier of each service node.
By extracting the operation index data from the monitoring log, the operation of each service node can be monitored in real time and its latest state obtained immediately, so that decisions can be made in time, such as whether to distribute a configuration task to a certain node. Potential problems, such as abnormal use of node resources or increased network delay, can also be discovered in time and measures taken before they affect the data processing. This helps to maintain stable operation of the system.
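The per-node extraction step can be sketched as below. A real deployment would query the monitoring tool's API (e.g. Prometheus); here a simplified log format of `<node_id> <metric>=<value>` lines is assumed purely for illustration.

```python
def extract_node_metrics(log_lines, node_id):
    """Pull the operation index data for one service node out of a
    monitoring log, keyed by the node's identifier."""
    metrics = {}
    for line in log_lines:
        ident, _, rest = line.partition(" ")
        if ident != node_id:
            continue  # line belongs to a different service node
        name, _, value = rest.partition("=")
        metrics[name] = float(value)
    return metrics

log = [
    "node-1 cpu_usage=0.50",
    "node-1 memory_usage=0.40",
    "node-2 cpu_usage=0.70",
    "node-2 memory_usage=0.50",
]
node1_metrics = extract_node_metrics(log, "node-1")
```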
In one embodiment, the operation index data includes a CPU usage rate, a memory usage rate, a disk I/O read/write speed, a network bandwidth usage rate, a number of tasks not executed, and an operation speed of the tasks executed, and the calculating the load value of each service node according to the operation index data includes:
The first operation index data and the second operation index data are selected at will from the operation index data and are respectively substituted into a preset load value formula to be calculated, so that the load value of each service node is obtained;
Wherein the load value formula is L_i = W1 × A_i + W2 × B_i, where W1 and W2 are given weight factors, A_i is the first operation index data of the i-th service node, and B_i is the second operation index data of the i-th service node.
The operation index data of each service node is obtained from the monitoring tool, and the first and second operation index data are arbitrarily selected from it and respectively substituted into the preset load value formula. This reflects the actual load condition of each service node more accurately, so that tasks can be distributed intelligently according to the current load. For example, a service node with a lower load value may be prioritized for new configuration tasks.
Illustrating:
there are two service nodes, Node1 and Node2; CPU utilization and memory utilization are selected as the calculation indices, and it is assumed that W1 = 0.5 and W2 = 0.5.
The CPU utilization of Node1 is 50% and the memory utilization is 40%.
The CPU utilization of Node2 is 70% and the memory utilization is 50%.
According to the load value formula Li = W1 × Ai + W2 × Bi, the following can be calculated:
The load value of Node1: L_Node1 = 0.5 × 0.5 + 0.5 × 0.4 = 0.45, i.e., 45%.
The load value of Node2: L_Node2 = 0.5 × 0.7 + 0.5 × 0.5 = 0.60, i.e., 60%.
In this case, the load of the Node1 is low, so that a new configuration task can be preferentially allocated to the Node1, and in this way, the load condition of the service Node can be more accurately estimated and managed, thereby realizing efficient resource allocation and task processing.
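The worked example above can be sketched directly in code. The weights and metric values are the ones given in the example; the function name is an illustrative choice, not from the patent:

```python
# Sketch of the load value formula Li = W1*Ai + W2*Bi from the example above.
def load_value(a, b, w1=0.5, w2=0.5):
    """Weighted combination of two operation indices (e.g., CPU and memory)."""
    return w1 * a + w2 * b

l_node1 = load_value(0.50, 0.40)  # Node1: CPU 50%, memory 40%
l_node2 = load_value(0.70, 0.50)  # Node2: CPU 70%, memory 50%

# Prefer the node with the lower load value for new configuration tasks.
preferred = "Node1" if l_node1 < l_node2 else "Node2"
```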
In one embodiment, the calculating the load value of each service node according to the operation index data includes:
preprocessing the operation index data, wherein the preprocessing comprises removing invalid or error data;
converting the preprocessed operation index data into feature values; collecting the feature values corresponding to all operation indexes to obtain a corresponding feature value array; calculating the feature value array with a trained load prediction model, which assigns a preset weight to each feature value in the array; and performing a weighted summation over the weighted feature value array to obtain the load value of each service node.
The operation index data, which includes CPU usage, memory usage, disk I/O read/write speed, network bandwidth usage, the number of unexecuted tasks, and the running speed of executed tasks, is preprocessed: for example, it is checked for abnormal values, missing values, or erroneous values, which are then handled. If the running speed of an executed task reported by a service node is negative or outside a reasonable range, it is treated as an abnormal value and corrected or deleted. If data is missing, interpolation, forward filling, backward filling, or other statistical methods may be used to fill in the missing values.
Converting each preprocessed operation index data into a corresponding characteristic value, wherein the method comprises the following steps of:
If the operation index data is given in the form of percentages (e.g., CPU usage, memory usage), these values may be used directly. For example, if the CPU usage is 75%, 0.75 may be directly taken as a characteristic value.
If the disk I/O read/write speed is given in bytes/second, it may be converted to a standardized form, for example, to read/write operations per second, or compared to a reference speed to derive a relative value.
If the network bandwidth usage is given in Mbps or Gbps, it can be considered to be converted into a percentage form, for example, if the maximum bandwidth is 1Gbps and the current usage is 600Mbps, the converted percentage is 60%.
If the number of tasks performed is an absolute number, it can be compared to the maximum processing capacity of the node to obtain a relative value, e.g., if the maximum processing capacity of the node is 100 tasks, 30 tasks are currently being performed, the percentage after conversion is 30%.
The feature values corresponding to all the operation indexes are collected for each service node to obtain a feature value array. For example, for Node1, if its CPU utilization is 0.75, memory utilization is 0.4, disk I/O read/write speed is 0.6, network bandwidth utilization is 0.5, and the proportion of executing tasks is 0.3, the resulting feature value array may be denoted as [0.75, 0.4, 0.6, 0.5, 0.3].
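The conversions described above can be sketched as a small normalization step. The reference disk speed, maximum bandwidth, and node task capacity used below are assumptions made for illustration:

```python
# Sketch of converting raw operation index data into normalized feature values,
# following the conversions described above. Reference/capacity figures and
# field names are assumptions for illustration.
def to_features(raw):
    return [
        raw["cpu_pct"] / 100.0,                       # percentage -> fraction
        raw["mem_pct"] / 100.0,                       # percentage -> fraction
        raw["disk_io_bps"] / raw["disk_io_ref_bps"],  # relative to a reference speed
        raw["bandwidth_mbps"] / raw["max_bandwidth_mbps"],  # share of max bandwidth
        raw["tasks_running"] / raw["max_tasks"],      # relative to node capacity
    ]

node1_raw = {
    "cpu_pct": 75, "mem_pct": 40,
    "disk_io_bps": 60e6, "disk_io_ref_bps": 100e6,
    "bandwidth_mbps": 500, "max_bandwidth_mbps": 1000,
    "tasks_running": 30, "max_tasks": 100,
}
features = to_features(node1_raw)  # approximately [0.75, 0.4, 0.6, 0.5, 0.3]
```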
And inputting the characteristic value array of each service node into a trained load prediction model, and distributing a weight for each characteristic value in the characteristic value array. For example, a simple linear regression model is used as the load prediction model, and the load value L can be calculated by the following formula:
L=w1·CPU+w2·Memory+w3·DiskIO+w4·Bandwidth+w5·Tasks
Where w1 through w5 are the feature weights, and CPU, Memory, DiskIO, Bandwidth, and Tasks are the feature values in the feature value array. If CPU usage is determined to have a greater impact on the load, it may be assigned a higher weight while the other features are assigned lower weights.
After each feature value in the feature value array is assigned with a preset weight, the feature value arrays assigned with the weights are subjected to linear combination (for example, weighted summation) or nonlinear combination (for example, through an activation function) to obtain the load value of each service node.
For example, for Node1, if the load value calculated by the load prediction model is 0.58, then Node1 may be considered currently in a medium load state. If the load value is lower, the new configuration task can be considered to be allocated to the node, and if the load value is higher, the allocation of the new task to the node is avoided, and the node with lower load is searched to process the new configuration task.
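The weighted-sum prediction and the load classification described above can be sketched as follows. The weights and the low/medium/high thresholds are illustrative assumptions; in the embodiment they would come from the trained model:

```python
# Sketch of the weighted-sum load predictor L = w1*CPU + w2*Memory + w3*DiskIO
# + w4*Bandwidth + w5*Tasks. Weights and thresholds are assumptions; a trained
# model would supply the actual weights.
weights = [0.4, 0.2, 0.15, 0.15, 0.1]   # CPU assumed to matter most

def predict_load(feature_array, w=weights):
    return sum(wi * fi for wi, fi in zip(w, feature_array))

node1_features = [0.75, 0.4, 0.6, 0.5, 0.3]
load = predict_load(node1_features)

# Classify the node by its load value (threshold choices are assumptions).
state = "low" if load < 0.4 else "medium" if load < 0.7 else "high"
```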
The load value of each service node is automatically calculated through the trained load prediction model, so that the requirement of manual intervention is reduced, the latest operation index data can be processed in real time, the load prediction result is provided immediately, the system can respond to load change quickly, and the system can manage resources more intelligently.
A training process for a load prediction model, comprising:
The monitoring tool is utilized to collect operation index data of the service node, including but not limited to CPU usage, memory usage, disk I/O read/write speed, network bandwidth usage, number of unexecuted tasks, operation speed of executed tasks, and the like. The operation index data is preprocessed, and the preprocessed operation index data is divided into a training set and a testing set (for example, the proportion is 70% training, 30% testing), and can be further divided into a training set, a verification set and a testing set.
A regression model is used as the initial load prediction model, such as linear regression, Support Vector Machines (SVMs), decision trees, random forests, gradient-boosted trees (e.g., XGBoost or LightGBM), or neural networks. The initial load prediction model is trained using the training set data. During training, the hyperparameters are adjusted according to the performance of the initial load prediction model to optimize its performance. To prevent overfitting, cross-validation techniques can be used to evaluate the generalization ability of the initial load prediction model.
The parameters of the initial load prediction model are adjusted according to the validation results, and the training and validation processes are repeated until the results are satisfactory, yielding a trained load prediction model. The trained load prediction model is then integrated into the production environment so that predictions can be made on the real-time data stream.
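A minimal sketch of this training loop is shown below, using a plain gradient-descent linear regression on a 70/30 train/test split. The synthetic data and the "true" weights are assumptions purely for illustration; a real system would train one of the models listed above on collected monitoring data, typically with a library such as scikit-learn or XGBoost:

```python
# Minimal sketch: train a linear load prediction model by gradient descent on
# a 70% train / 30% test split. Data is synthetic (an assumption); a real
# pipeline would use collected operation index data and an ML library.
import random

random.seed(0)
true_w = [0.4, 0.2, 0.15, 0.15, 0.1]     # hidden weights generating the data
data = []
for _ in range(200):
    x = [random.random() for _ in range(5)]
    y = sum(wi * xi for wi, xi in zip(true_w, x))
    data.append((x, y))

split = int(0.7 * len(data))             # 70% training, 30% testing
train, test = data[:split], data[split:]

w = [0.0] * 5
lr = 0.1
for _ in range(2000):                    # stochastic gradient descent epochs
    for x, y in train:
        pred = sum(wi * xi for wi, xi in zip(w, x))
        err = pred - y
        w = [wi - lr * err * xi for wi, xi in zip(w, x)]

# Evaluate generalization on the held-out test set (mean squared error).
test_mse = sum(
    (sum(wi * xi for wi, xi in zip(w, x)) - y) ** 2 for x, y in test
) / len(test)
```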
In step S2, the operation state of the service node is monitored in real time, the load value of the node is calculated according to the operation index data, the appropriate node is screened according to the load value, the node is divided into a plurality of groups, and each group processes a configuration task. The overload of some nodes and the idle of other nodes are avoided, more reasonable resource utilization is realized, the load balancing capability of the system is improved, and the stability and the reliability of the system are enhanced.
S3, constructing a storage document for storing configuration data for each configuration task in a preset database, and creating a transmission channel between each storage document and the corresponding node group.
In this embodiment, a shell script is created in a preset database; for example, the preset database may be a Hudi storage database.
To improve reading speed and reduce data storage space, the invention optimizes configuration data storage and reading performance through the indexing and compression techniques of the Hudi storage database, which is very beneficial for real-time data processing and accelerates data reads.
The name of each configuration task is taken as a parameter of a shell script. Based on the shell script with the configured parameters, an operation command of the preset database is called to create a corresponding storage document for each configuration task in the preset database. Based on the GET mode of the transmission protocol built into the preset database, a separate GET request is configured for each storage document and sent to the corresponding node group to establish a communication connection, thereby obtaining a transmission channel.
In one embodiment, the building a storage document for storing configuration data for each configuration task includes:
creating a shell script in a preset database, and taking the name of each configuration task as a parameter of the shell script;
and calling an operation command of a preset database based on the shell script of the configuration parameters, and creating a corresponding storage document for each configuration task in the preset database.
A shell script is created in a preset database (for example, a Hudi storage database). The main function of the shell script is to receive the name of each configuration task and, according to that name, execute the operation of creating a storage document in the database. A shell script is a program written in the shell language of a Unix/Linux operating system; the shell is an interface between the user and the operating system that allows the user to interact with the system through a command-line interface (CLI).
And calling operation commands of a preset database through Shell scripts, wherein the operation commands are used for creating a storage document in the database. For each configuration task name passed to the script, the Shell script will execute a series of commands to create a stored document for that task in the preset database.
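The script described above can be sketched as follows. Because the patent does not give the database's actual operation commands, the document-creation step is simulated here with `mkdir`/`touch`; a real deployment would call the storage database's own CLI instead:

```shell
#!/bin/sh
# Sketch: take configuration task names as parameters and create one storage
# document per task in a preset database. The database operation command is
# simulated with mkdir/touch (an assumption); a real Hudi setup would invoke
# its own commands here.
DB_DIR="${DB_DIR:-/tmp/preset_db}"
mkdir -p "$DB_DIR"

create_docs() {
    for task_name in "$@"; do
        # One independent storage document per configuration task.
        touch "$DB_DIR/${task_name}.doc"
        echo "created storage document for task: $task_name"
    done
}

# Hypothetical task names matching the insurance example later in the text.
create_docs car_insurance accident_insurance health_insurance
```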
In the prior art, the whole configuration data is downloaded to a cache database before calculation. If the configuration data is large, downloading the whole data set to the cache database takes a long time; even if only a small amount of data is missing, the entire processing flow must wait, prolonging the start time of the calculation process. While waiting for the data to finish downloading, computing resources may sit idle, unable to begin processing the data already downloaded, which wastes resources.
According to the invention, by constructing an independent storage document for each configuration task, the configuration data of the current storage document can be calculated and processed only by downloading the configuration data of the current storage document, and the whole configuration data is not required to be downloaded, so that a plurality of tasks can be processed in parallel without mutual interference, and the speed and the efficiency of data processing can be remarkably improved.
Illustrating:
Suppose there are three insurance products: car insurance, personal accident insurance, and health insurance. In the prior art, processing can start only after the configuration data of all three products has been completely downloaded to the cache database. If the car insurance data volume is the largest and the health insurance data volume is the smallest, the system must wait for all the data to be downloaded before processing can start, even if the health insurance data has already finished downloading.
In the present method, an independent storage document is created for each configuration task (car insurance, personal accident insurance, and health insurance). When the download of the health insurance configuration data is completed, the system immediately starts processing the health insurance data. Likewise, after the personal accident insurance data finishes downloading, its processing begins. Finally, once the car insurance data is downloaded, it is processed as well.
In this way, even if some data is not downloaded, the downloaded data can be processed immediately, and the processing speed and efficiency are improved remarkably. Meanwhile, each task is independently processed, so that mutual interference is avoided, the system can better utilize computing resources, waiting time is reduced, and overall performance is improved.
In one embodiment, the creating a transmission channel between each stored document and the corresponding node group includes:
Based on a GET mode of a preset transmission protocol, configuring an independent GET request for each storage document;
and sending the GET request to a corresponding node group for communication connection to obtain a transmission channel.
A unique URI (Uniform Resource Identifier) is constructed for each stored document. The URI typically contains the document's path, table name, partition information, etc., to facilitate locating the particular document. Based on the GET mode of the transmission protocol built into the preset database, a separate GET request is configured for each storage document.
The necessary HTTP header information is constructed for the GET request; for example, the header information includes authentication information, content type, etc. The URL is added to the GET request, which is then sent to the corresponding node group via an HTTP client library. The node group receives the GET request sent by the Hudi storage database, which in effect creates a temporary transmission channel, and returns configuration data in the response format of the GET request (typically JSON, Parquet, or a similar format).
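Constructing the per-document URI and headers can be sketched as below. The endpoint, URI layout, and token are hypothetical; actually sending the request would use an HTTP client library such as `urllib` or `requests`:

```python
# Sketch of configuring a separate GET request for each storage document.
# The endpoint, URI layout, and credentials are assumptions for illustration.
BASE = "http://hudi-store.example.com"   # hypothetical database endpoint

def build_get_request(table, partition, doc_name, token):
    """Build the URI and headers for one storage document's GET request."""
    uri = f"{BASE}/{table}/{partition}/{doc_name}"
    headers = {
        "Authorization": f"Bearer {token}",   # authentication information
        "Accept": "application/json",         # expected response format
    }
    return {"method": "GET", "uri": uri, "headers": headers}

req = build_get_request("config_tasks", "2024-09", "health_insurance", "tkn123")
```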
By dispersing the requests to a plurality of service nodes, the whole data transmission process is not affected even if a certain service node has a problem, and the stability and the reliability of the system are improved. If the load of a certain service node suddenly increases, dynamic adjustment can be performed by reallocating the GET request to other service nodes with lower loads, so that smoothness of data transmission is ensured.
In step S3, a separate storage document is created for each configuration task in the database, a transmission channel is established, and configuration data is downloaded from the designated node group into the storage document by a GET request. The problem that the processing can be started after the whole data set is downloaded in the traditional mode is solved, the processing is realized while the downloading is carried out, the processing efficiency is improved, the pressure of a cache database is reduced, and the overload risk caused by the limited capacity of the cache database is avoided.
S4, downloading configuration data from the corresponding node groups to the corresponding storage documents by utilizing the corresponding transmission channels of each configuration task, performing calculation processing on the configuration data of each storage document, and writing calculation results into the data tasks to be processed.
In this embodiment, after determining the transmission channel corresponding to each configuration task, each configuration task has its own independent transmission channel, so as to ensure independence and high efficiency of data transmission.
And downloading the configuration data from the corresponding node group to the corresponding storage document through the GET request, thereby realizing parallel receiving of the configuration data.
The configuration data is parsed using a preset parsing tool (for example, a corresponding library or tool such as Python's json module), and the parsed configuration data is calculated and processed according to the preset requirements of each configuration task to obtain a calculation result. The calculation processing includes data verification, data conversion, and aggregation operations. Specifically, data verification ensures that the configuration data is properly formatted, for example, by checking whether it contains the necessary fields, such as insurance product ID and name, and verifies whether its content meets expectations, for example, whether the premium is a positive number and whether the claims conditions are reasonable.
Data conversion converts the configuration data into a format suitable for further processing. For example, numbers of string type are converted to integers or floating-point numbers; if the configuration data contains values in different units, they need to be converted into a unified unit for comparison or calculation.
The aggregation operation gathers configuration data according to the requirements of the configuration task, for example, calculating the average premium of a certain class of insurance products, or combining configuration data from different sources to form complete configuration information, such as combining premium information and claims data for comprehensive analysis.
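The verification, conversion, and aggregation steps can be sketched together as a small pipeline. The field names, validation rules, and sample records are illustrative assumptions, not from the patent:

```python
# Sketch of the verification -> conversion -> aggregation pipeline applied to
# parsed configuration data. Field names and rules are illustrative.
def validate(record):
    # Verification: required fields present and the premium is a positive number.
    return (
        "product_id" in record
        and "name" in record
        and float(record["premium"]) > 0
    )

def convert(record):
    # Conversion: string-typed premium to a float in a unified unit.
    out = dict(record)
    out["premium"] = float(record["premium"])
    return out

raw = [
    {"product_id": "P1", "name": "car insurance", "premium": "1200.0"},
    {"product_id": "P2", "name": "health insurance", "premium": "800.0"},
    {"name": "broken record", "premium": "-5"},   # fails verification
]
clean = [convert(r) for r in raw if validate(r)]

# Aggregation: average premium across the valid products.
avg_premium = sum(r["premium"] for r in clean) / len(clean)
```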
The calculation result is written into the data task to be processed. For example, if the data task to be processed is an insurance policy to be associated and detailed information of the various insurance products in the policy needs to be supplemented, the calculation result obtained is that detailed information, which is then supplemented into the insurance policy to be associated.
If the information of various insurance products in the insurance policy to be associated is required to be checked to be correct, the obtained calculation result is the checking result of the information of the various insurance products, so that the checking result of the various insurance products is supplemented to the insurance policy to be associated.
If the price of each insurance product in the insurance policy to be associated is required to be calculated, the obtained calculation result is the price of each insurance product, so that the price of each insurance product is supplemented to the insurance policy to be associated.
In step S4, the configuration data is downloaded to the storage documents by using the transmission channel, the configuration data in each storage document is calculated, and the processing result is written back to the task of the data to be processed. The data processing speed is improved by processing a plurality of configuration tasks in parallel, and the accuracy and consistency of the data are ensured and the processing quality is improved by data verification, conversion and aggregation operation.
In summary, steps S1 to S4 act together to improve the speed and stability of data configuration processing, solve the problems of network delay, slow data loading speed and the like caused by the traditional serial request mode, solve the problem of uneven resource utilization through a dynamic load balancing mechanism, and further improve the processing capacity and stability of the system through parallel processing and optimized data storage and transmission mechanisms. These improvements are of great importance for handling large, complex data configuration tasks, especially in insurance financial scenarios where high real-time and resource utilization are required.
Fig. 2 is a schematic block diagram of a task data configuration device according to an embodiment of the invention.
The task data configuration device 100 according to the present invention may be installed in an electronic apparatus. The task data configuration device 100 may include a receiving module 110, a calculating module 120, a constructing module 130, and a writing module 140 according to the implemented functions. The module of the invention, which may also be referred to as a unit, refers to a series of computer program segments, which are stored in the memory of the electronic device, capable of being executed by the processor of the electronic device and of performing a fixed function.
In the present embodiment, the functions concerning the respective modules/units are as follows:
A receiving module 110, configured to receive a request for configuring data of a data task to be processed, identify key fields of the data task to be processed, determine N types of data to be configured for the data task to be processed according to N key fields obtained by identification, and generate N configuration tasks of the data task to be processed according to the N types of data, where the key fields include a configuration data name or a configuration data ID;
The computing module 120 is configured to obtain operation index data of each service node in a preset server cluster, calculate a load value of each service node according to the operation index data, screen a plurality of service nodes with load values smaller than a preset threshold, divide the service nodes into N node groups, and associate each node group with each configuration task in a one-to-one correspondence manner;
a construction module 130, configured to construct a storage document for storing configuration data for each configuration task in a preset database, and create a transmission channel between each storage document and a corresponding node group;
The writing module 140 is configured to download the configuration data from the corresponding node group to the corresponding storage document by using the transmission channel corresponding to each configuration task, perform calculation processing on the configuration data of each storage document, and write the calculation result into the data task to be processed.
Fig. 3 is a schematic structural diagram of an electronic device for implementing a task data configuration method according to an embodiment of the present invention.
In the present embodiment, the electronic device 1 includes, but is not limited to, a memory 11, a processor 12, and a network interface 13, which are communicably connected to each other via a system bus, and the memory 11 stores therein a task data configuration program 10, the task data configuration program 10 being executable by the processor 12. Fig. 3 shows only the electronic device 1 with the components 11-13 and the task data configuration program 10, it being understood by those skilled in the art that the structure shown in fig. 3 does not constitute a limitation of the electronic device 1 and may include fewer or more components than shown, or may combine certain components, or a different arrangement of components.
Wherein the memory 11 comprises a memory and at least one type of readable storage medium. The memory provides a buffer for the operation of the electronic device 1, and the readable storage medium may be a non-volatile storage medium such as flash memory, a hard disk, a multimedia card, a card-type memory (e.g., SD or DX memory), a random access memory (RAM), a static random access memory (SRAM), a read-only memory (ROM), an electrically erasable programmable read-only memory (EEPROM), a programmable read-only memory (PROM), a magnetic memory, a magnetic disk, an optical disk, etc. In some embodiments the readable storage medium may be an internal storage unit of the electronic device 1; in other embodiments the non-volatile storage medium may also be an external storage device of the electronic device 1, such as a plug-in hard disk, a smart media card (SMC), a secure digital (SD) card, or a flash card provided on the electronic device 1. In this embodiment, the readable storage medium of the memory 11 is generally used to store the operating system and various application software installed in the electronic device 1, for example, the code of the task data configuration program 10 in one embodiment of the present invention. Further, the memory 11 may be used to temporarily store various types of data that have been output or are to be output.
The processor 12 may be a central processing unit (CPU), a controller, a microcontroller, a microprocessor, or another data processing chip in some embodiments. The processor 12 is typically used to control the overall operation of the electronic device 1, such as performing control and processing related to data interaction or communication with other devices. In this embodiment, the processor 12 is configured to execute the program code or process data stored in the memory 11, for example, to execute the task data configuration program 10.
The network interface 13 may comprise a wireless network interface or a wired network interface, the network interface 13 being used for establishing a communication connection between the electronic device 1 and a terminal (not shown).
Optionally, the electronic device 1 may further comprise a user interface, which may comprise a Display (Display), an input unit such as a Keyboard (Keyboard), and optionally a standard wired interface, a wireless interface. Alternatively, in some embodiments, the display may be an LED display, a liquid crystal display, a touch-sensitive liquid crystal display, an OLED (organic light-emitting diode) touch, or the like. The display may also be referred to as a display screen or display unit, as appropriate, for displaying information processed in the electronic device 1 and for displaying a visual user interface.
It should be understood that the embodiments described are for illustrative purposes only and are not limited to this configuration in the scope of the patent application.
The task data configuration program 10 stored in the memory 11 in the electronic device 1 is a combination of instructions that, when executed in the processor 12, may implement:
Receiving a request for configuring data of a data task to be processed, identifying key fields of the data task to be processed, determining N types of data to be configured of the data task to be processed according to N key fields obtained by identification, and generating N configuration tasks of the data task to be processed according to the N types of data, wherein the key fields comprise configuration data names or configuration data IDs;
Acquiring operation index data of each service node in a preset server cluster, calculating a load value of each service node according to the operation index data, screening a plurality of service nodes with load values smaller than a preset threshold value, dividing the service nodes into N node groups, and associating each node group with each configuration task in a one-to-one correspondence manner;
Constructing a storage document for storing configuration data for each configuration task in a preset database, and creating a transmission channel between each storage document and a corresponding node group;
and downloading configuration data from the corresponding node group to the corresponding storage document by utilizing the transmission channel corresponding to each configuration task, performing calculation processing on the configuration data of each storage document, and writing the calculation result into the data task to be processed.
In particular, the specific implementation method of the task data configuration program 10 by the processor 12 may refer to the description of the relevant steps in the corresponding embodiment of fig. 1, which is not repeated herein.
Further, the modules/units integrated in the electronic device 1, if implemented in the form of software functional units and sold or used as separate products, may be stored in a computer-readable storage medium. The computer-readable medium may be non-volatile or volatile. The computer-readable medium may include any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, or a read-only memory (ROM).
The computer readable storage medium stores a task data configuration program 10, where the task data configuration program 10 may be executed by one or more processors, and the specific implementation of the computer readable storage medium is substantially the same as the above embodiments of the task data configuration method, and will not be described herein.
In the several embodiments provided in the present invention, it should be understood that the disclosed apparatus, device and method may be implemented in other manners. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the modules is merely a logical function division, and there may be other manners of division when actually implemented.
The modules described as separate components may or may not be physically separate, and components shown as modules may or may not be physical units, may be located in one place, or may be distributed over multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional module in the embodiments of the present invention may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units can be realized in a form of hardware or a form of hardware and a form of software functional modules.
It will be evident to those skilled in the art that the invention is not limited to the details of the foregoing illustrative embodiments, and that the present invention may be embodied in other specific forms without departing from the spirit or essential characteristics thereof.
The present embodiments are, therefore, to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference signs in the claims shall not be construed as limiting the claim concerned.
Furthermore, it is evident that the word "comprising" does not exclude other elements or steps, and the singular does not exclude the plural. A plurality of units or means recited in the system claims can also be implemented by one unit or means through software or hardware. Terms such as "first" and "second" are used to denote names and do not indicate any particular order.
Finally, it should be noted that the above-mentioned embodiments are merely for illustrating the technical solution of the present invention and not for limiting the same, and although the present invention has been described in detail with reference to the preferred embodiments, it should be understood by those skilled in the art that modifications and equivalents may be made to the technical solution of the present invention without departing from the spirit and scope of the technical solution of the present invention.
Claims (10)
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202411362702.4A CN119201399B (en) | 2024-09-27 | 2024-09-27 | Task data configuration method and device, electronic equipment and storage medium |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202411362702.4A CN119201399B (en) | 2024-09-27 | 2024-09-27 | Task data configuration method and device, electronic equipment and storage medium |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| CN119201399A true CN119201399A (en) | 2024-12-27 |
| CN119201399B CN119201399B (en) | 2025-09-16 |
Family
ID=94063548
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202411362702.4A Active CN119201399B (en) | 2024-09-27 | 2024-09-27 | Task data configuration method and device, electronic equipment and storage medium |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN119201399B (en) |
Cited By (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN119849863A (en) * | 2025-01-09 | 2025-04-18 | 北京都有科技有限公司 | Order allocation processing method and system based on decision tree algorithm |
Citations (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN103347055A (en) * | 2013-06-19 | 2013-10-09 | 北京奇虎科技有限公司 | System, device and method for processing tasks in cloud computing platform |
| CN110955524A (en) * | 2019-11-27 | 2020-04-03 | 北京网聘咨询有限公司 | Optimized scheduling method for server |
| CN113656183A (en) * | 2021-08-31 | 2021-11-16 | 平安医疗健康管理股份有限公司 | Task processing method, device, equipment and storage medium |
| CN114691873A (en) * | 2022-04-08 | 2022-07-01 | 广州文远知行科技有限公司 | Semantic processing method and device for automatic driving log data and storage medium |
| US20230328336A1 (en) * | 2022-04-07 | 2023-10-12 | Lemon Inc. | Processing method and apparatus, electronic device and medium |
Worldwide Applications (1)
| Filing Date | Country | Application | Status |
|---|---|---|---|
| 2024-09-27 | CN | CN202411362702.4A (CN119201399B) | Active |
Also Published As
| Publication number | Publication date |
|---|---|
| CN119201399B (en) | 2025-09-16 |
Similar Documents
| Publication | Title |
|---|---|
| CN108776934B (en) | Distributed data calculation method and device, computer equipment and readable storage medium |
| US11609911B2 | Selecting a normalized form for conversion of a query expression |
| US20100153431A1 | Alert triggered statistics collections |
| CN113312376B | Method and terminal for real-time processing and analysis of Nginx logs |
| CN109981702B | File storage method and system |
| CN111045911B | Performance test method, performance test device, storage medium and electronic equipment |
| US20170048120A1 | Systems and Methods for WebSphere MQ Performance Metrics Analysis |
| US9965327B2 | Dynamically scalable data collection and analysis for target device |
| CN112527599A | Intelligent monitoring method and device, electronic equipment and readable storage medium |
| CN112052082B | Task attribute optimization method, device, server and storage medium |
| CN103077197A | Data storing method and device |
| CN113051060B | GPU dynamic scheduling method and device based on real-time load, and electronic device |
| CN114356712B | Data processing method, apparatus, device, readable storage medium, and program product |
| US11816511B1 | Virtual partitioning of a shared message bus |
| CN113377866B | Load balancing method and device for virtualized database proxy service |
| CN108228322B | Distributed link tracking and analyzing method, server and global scheduler |
| CN119201399B | Task data configuration method and device, electronic equipment and storage medium |
| CN119759553A | Load balancing method, device, electronic equipment and computer program product |
| CN111984505A | Operation and maintenance data acquisition engine and acquisition method |
| CN113434278A | Data aggregation system, method, electronic device, and storage medium |
| CN116975109A | Data quality detection method and device |
| US20080065588A1 | Selectively Logging Query Data Based On Cost |
| US12468574B2 | Systems and methods for dynamically scaling remote resources |
| CN120631870A | Project information collection method and system based on plug-in technology |
| JP5043166B2 | Computer system, data search method, and database management computer |
Legal Events
| Code | Title |
|---|---|
| PB01 | Publication |
| SE01 | Entry into force of request for substantive examination |
| GR01 | Patent grant |