Disclosure of Invention
In order to solve the problems in the prior art, the application aims to provide a cloud computing-based dynamic collaboration and deployment method for an enterprise digital tool chain.
The application discloses a cloud computing-based dynamic collaboration and deployment method for an enterprise digital tool chain, which comprises the following steps:
S101, monitoring version change conditions of core tools in a tool chain, acquiring newly added data fields and abandoned interface information after upgrading, judging differences of interface protocols before and after upgrading through a semantic analysis method, adjusting data mapping relations according to the differences, obtaining adjustment contents, and generating a data mapping relation change list;
S102, according to the data mapping relation change list in the step S101, analyzing the change contents of the interface protocol and data fields by adopting a rule-based reasoning method, and generating data mapping scripts among different versions of tools according to the pre-configured interface protocol adaptation rules;
S103, under the condition that the tool version is updated to cause the interface protocol in the step S102 to be changed, adjusting the data mapping relation, in the adjustment process, carrying out similarity analysis on the newly added field and the existing field, deducing the relevance between the newly added field and the existing field, incorporating the newly added field into a data field matcher, constructing the data mapping relation, and deleting the abandoned interface;
S104, after deleting the abandoned interfaces in the step S103, adopting a shortest path search algorithm based on a graph to bypass the deleted abandoned interfaces, re-planning a data transmission path among tools, and re-forming a tool chain data transmission path;
S105, collecting data processing time delay and resource consumption of tool nodes in the reformed tool chain data transmission path in the step S104, analyzing average processing time delay and throughput before and after the interface protocol change, judging whether network delay and bandwidth limitations exist, and if so, acquiring the data fragment size and transmission batch parameters in the data processing process;
S106, optimizing the circulation efficiency of the data in the tool chain according to the size of the data fragments and the transmission batch parameters in the step S105, and dynamically adjusting the execution sequence and concurrency of each tool in the tool chain according to the optimized data circulation scheme.
Preferably, in the step S101, a field change set is extracted from a version management warehouse according to a patch package serial number, a field change index table is established, the name and the type of a newly added field in the index table are identified through regular matching, a feature vector calculator is utilized to obtain a semantic feature value, a field pair with similarity larger than a threshold value is selected, the type compatibility of the field is judged through a data type verifier, a data migration rule is generated through a data conversion compensator based on the type compatibility, and data consistency verification is performed through a rule verifier, so that the reliability level of the mapping rule is marked, and a mapping relation list with a reliability level mark is generated;
And checking the mapping rule with the reliability level lower than the threshold value in the mapping relation list with the reliability level mark, updating the reliability level of the mapping rule according to the checking result, and generating a data mapping change list.
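As a minimal sketch of the field-pair selection described above (the patent does not fix a concrete similarity measure, so Python's difflib ratio stands in for the feature vector calculator; the threshold value is illustrative):

```python
from difflib import SequenceMatcher

def field_similarity(a: str, b: str) -> float:
    """Stand-in for the semantic feature comparison: normalized
    string similarity between two field names."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def match_fields(new_fields, old_fields, threshold=0.7):
    """For each newly added field, pick the most similar existing
    field above the threshold; unmatched fields get no mapping."""
    mapping = {}
    for nf in new_fields:
        best, best_sim = None, threshold
        for of in old_fields:
            sim = field_similarity(nf, of)
            if sim > best_sim:
                best, best_sim = of, sim
        if best is not None:
            mapping[nf] = best
    return mapping
```

A real implementation would substitute the feature vector calculator's semantic vectors for the string ratio, but the threshold-and-best-match structure is the same.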
Preferably, in the step S102, the rule parser is used to read the interface change rule set from the data mapping relation change list, and the field mapping relation is parsed by the rule interpreter to construct a rule grammar tree;
Generating an initial rule relation graph based on the rule grammar tree, and verifying the integrity of node connection through an integrity checker;
and selecting a matching rule from a rule template library to complement the mapping relation among the rule nodes according to the verification result of the integrity checker, obtaining a complemented rule relation diagram, converting the complemented rule relation diagram into a data processing instruction sequence by using a mapping template generator, and verifying the correctness of data conversion by using a data sample tester to obtain a data mapping script.
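The conversion of a completed rule relation diagram into a data processing instruction sequence can be sketched as follows; the rule schema and operation names (`rename`, `default`) are illustrative assumptions, not the patent's own format:

```python
def compile_rules(rules):
    """Compile field-mapping rules into an ordered instruction list
    that the interpreter below can execute on one record."""
    instructions = []
    for r in rules:
        if r["op"] == "rename":
            instructions.append(("rename", r["source"], r["target"]))
        elif r["op"] == "default":
            instructions.append(("default", r["target"], r["value"]))
    return instructions

def run_script(instructions, record):
    """Apply the instruction sequence to one record -- the generated
    'data mapping script' in miniature."""
    out = dict(record)
    for ins in instructions:
        if ins[0] == "rename":
            _, src, dst = ins
            if src in out:
                out[dst] = out.pop(src)
        elif ins[0] == "default":
            _, field, value = ins
            out.setdefault(field, value)
    return out
```

The data sample tester in the text would then replay known input/output record pairs through `run_script` to verify the conversion.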
Preferably, in the step S103, the new and old version interface definition files are read by the protocol change parser, the field names and types are extracted, the field feature vectors are generated by using the text vectorization tool, and the cosine similarity between the newly added field and the existing field is calculated;
If the similarity exceeds the threshold value, generating a field mapping rule set;
if the similarity does not exceed the threshold value, a field mapping rule set is not generated;
And acquiring a newly added field data sample by using a data sampler, calculating a data distribution feature vector to judge the accuracy of a field mapping rule, identifying the dependency relationship of the abandoned interface by calling a link analyzer, generating a cleaning sequence table by using a topology sequencing algorithm, and confirming the disabling state of the cleaning interface by calling a verifier.
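The cosine similarity computation over field feature vectors can be illustrated with a simple character-trigram vectorization (the trigram scheme is an assumption; any text vectorization tool could supply the vectors):

```python
from collections import Counter
from math import sqrt

def trigram_vector(name: str) -> Counter:
    """Illustrative text vectorization: character trigram counts of a
    padded, lowercased field name."""
    s = f"  {name.lower()}  "
    return Counter(s[i:i + 3] for i in range(len(s) - 2))

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[k] * b[k] for k in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0
```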
Preferably, in the step S104, a node detector sends a probe request to a node in the tool link, and the topology structure of the tool link is obtained according to the comparison between the response time of the node and a preset response time threshold;
calculating data flow paths among the nodes by using a connectivity detector, and determining the weighted shortest path based on the transmission rate and the processing rate of each path;
The path scoring device calculates the scores of the transmission delay, the packet loss rate and the load balancing degree according to the weighted shortest paths, and judges the optimal transmission path according to the weights of the scores;
And the configuration issuing device updates a routing configuration table of the node according to the optimal transmission path, meanwhile, the performance collector is responsible for collecting transmission performance indexes, and when the presence of abnormal performance indexes is detected, the node is switched to an alternative path.
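The graph-based shortest path search used to re-plan transmission paths can be sketched with a standard Dijkstra implementation over a weighted adjacency map (the weight model, combining transmission and processing cost into one edge weight, is an assumption):

```python
import heapq

def dijkstra(graph, source, target):
    """Weighted shortest path over the tool-link topology; deleted
    abandoned interfaces are simply absent from `graph`, so the
    search bypasses them automatically."""
    dist = {source: 0.0}
    prev = {}
    heap = [(0.0, source)]
    visited = set()
    while heap:
        d, node = heapq.heappop(heap)
        if node in visited:
            continue
        visited.add(node)
        if node == target:
            break
        for nbr, w in graph.get(node, {}).items():
            nd = d + w
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                prev[nbr] = node
                heapq.heappush(heap, (nd, nbr))
    if target not in dist:
        return None, float("inf")
    path, node = [target], target
    while node != source:
        node = prev[node]
        path.append(node)
    return path[::-1], dist[target]
```

The path scoring device of the text would then rank several candidate paths by delay, loss rate, and load-balance scores rather than returning only the single cheapest one.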
Preferably, in the step S105, processing delay data and resource consumption data are obtained from a tool chain node by using a performance collector, and a data normalization processor is utilized to obtain performance index data with uniform scale according to the processing delay data and the resource consumption data, where the performance index data includes a processing delay index and a resource consumption index;
transmitting a detection packet through a network detector to acquire network transmission delay data;
And aiming at the network transmission delay data, obtaining the optimal fragment scale parameter by using a data fragment calculator.
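One hedged way to realize the data fragment calculator is a cost-model heuristic: estimate the transfer time for each candidate fragment size and choose the minimum. The cost terms (fixed per-fragment overhead, loss growing with fragment size) and all numbers are illustrative assumptions:

```python
def best_fragment_size(total_bytes, bandwidth_bps, per_fragment_overhead_s,
                       loss_per_byte, candidates):
    """Pick the candidate fragment size with the lowest estimated
    transfer time: payload time + fixed per-fragment overhead +
    expected retransmission time (loss probability grows with size)."""
    def estimate(size):
        fragments = -(-total_bytes // size)  # ceiling division
        payload = total_bytes / bandwidth_bps
        overhead = fragments * per_fragment_overhead_s
        retransmit = total_bytes * loss_per_byte * size / bandwidth_bps
        return payload + overhead + retransmit
    return min(candidates, key=estimate)
```

Small fragments pay too much per-fragment overhead, large fragments pay too much expected retransmission, so an intermediate size wins.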
Preferably, in the step S106, node resource monitoring data is obtained, and the occupancy rates of the node processor and the memory are counted by using a resource load calculator to obtain a resource load value;
Then, the data processor calculates the processing rate and the transmission rate of the node according to the resource load value, and obtains a transmission sample of a corresponding tool chain through a data sampler;
generating a test data set by using the transmission sample, and recording the backlog quantity of the data to obtain backlog state data;
The dependency verifier checks the dependency relationship of the tool nodes according to the backlog state data, and calculates a priority value to obtain a tool execution sequence;
and acquiring a scheduling sequence table from the tool execution sequence, dividing the computing resource quota according to the scheduling sequence table by the resource allocator, and dynamically adjusting the concurrency quantity of the tools by the resource monitor.
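The dynamic concurrency adjustment can be sketched as a simple load-feedback heuristic; the thresholds and halving/incrementing policy are illustrative assumptions, not the patent's method:

```python
def adjust_concurrency(current, cpu_load, mem_load,
                       low=0.5, high=0.8, max_workers=32):
    """Scale a tool's worker count down when either resource load
    exceeds `high`, up when both are below `low`, else hold steady."""
    load = max(cpu_load, mem_load)
    if load > high:
        return max(1, current // 2)
    if load < low:
        return min(max_workers, current + 1)
    return current
```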
The cloud computing-based enterprise digital tool chain dynamic collaboration and deployment method has the following advantages. Aiming at the problem of interface protocol changes caused by upgrading of a core tool version in the monitored tool chain, the method judges interface differences before and after upgrading through a semantic analysis technique and automatically adjusts the data mapping relation, and generates data mapping scripts among tools of different versions according to preset adaptation rules by utilizing a rule-based reasoning method, realizing dynamic data transmission and conversion. Meanwhile, the method continuously optimizes the data field matching model, analyzes the relevance of newly added fields to existing fields, builds a complete data mapping relation, and re-plans the data transmission path among tools by adopting a graph algorithm after deleting an abandoned interface.
The invention also builds a tool chain performance model, analyzes the processing time delay and throughput before and after the interface is changed, optimizes the data flow efficiency by adjusting the size of the data fragments and the transmission batch parameters, and dynamically adjusts the execution sequence and concurrency of the tools, thereby realizing the intelligent adaptation and performance optimization of the tool chain.
Detailed Description
As shown in figs. 1-2, the cloud computing-based dynamic collaboration and deployment method for an enterprise digital tool chain comprises the following steps:
as shown in fig. 1-2, in step S101, the version change condition of the core tool in the tool chain is monitored, the newly added data field and the abandoned interface information after upgrading are obtained, the difference between the interface protocols before and after upgrading is judged by a semantic analysis method, the data mapping relation is adjusted according to the difference, the adjustment content is obtained, and a data mapping relation change list is generated;
Acquiring a field change set from a version management warehouse according to the serial number of the patch package, and establishing a field change index table through the field change set;
identifying the name and the type of the newly added field in the field change index table by adopting regular matching, and acquiring the semantic feature value of the newly added field through a feature vector calculator;
selecting a field pair with similarity larger than a similarity threshold and closest to the semantic feature value, and judging the type compatibility of the field pair through a data type verifier;
Generating a data migration rule by adopting a data conversion compensator according to the field pair type compatibility, and acquiring a data consistency verification result of the data migration rule by a rule verifier;
marking the reliability level of the mapping rule according to the verification result, and generating a mapping relation list with reliability level marks;
and for mapping rules in the reliability-marked mapping relation list whose reliability level is lower than the threshold value, carrying out mapping relation rechecking, updating the reliability level of the mapping rules through the rechecking result, and generating a data mapping change list.
Specifically, in step S101, an upgrade version number, an upgrade date, and an upgrade patch serial number are acquired by using a data acquisition trigger for a monitoring tool chain, a field change set in a patch is acquired from a version management warehouse, and a field change index table is established according to the patch serial number;
Extracting interface change records before and after upgrading from a version management warehouse, identifying the name and the type of a newly added field by adopting regular matching, extracting field semantic feature values by a feature vector calculator, and calculating a similarity matrix of the new field and the old field according to the semantic feature values;
Setting a mapping matching threshold value for a field similarity matrix, selecting a field pair with similarity larger than the threshold value and closest to the threshold value by adopting a maximum similarity matching algorithm, verifying field type compatibility by a data type verifier, and generating an initial mapping rule table according to a matching result;
Acquiring a field mapping rule set from an initial mapping rule table, generating a data migration rule by adopting a data conversion compensator aiming at the abandoned field, checking the data consistency by a rule verifier, and generating the data migration rule set according to the verification result;
Acquiring a complete mapping scheme from a field mapping rule set and a data migration rule set, verifying mapping correctness by adopting a data sample verifier aiming at each mapping rule, marking the reliable grade of the mapping rule according to a verification result, and generating a mapping relation list with a reliable grade mark;
For mapping rules in the reliability-marked mapping relation list whose reliability level is lower than the threshold value, the mapping relation is checked through a manual confirmation interface, and the reliability level of the mapping rule is updated through the checking result to generate a data mapping change list;
In the process of monitoring tool chain data acquisition, when the data acquisition trigger acquires upgrade patch package information, an index relation is established according to the patch package serial number correspondence rule; the patch package serial number is formed by combining year, month, day, hour, minute, second and a random number. For example, in patch package serial number 20240102143022899, the timestamp denotes 14:30:22 on 2024-01-02 and 899 is the random number, and version change records can be rapidly located through the patch package serial number index;
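The serial number layout above (14-digit timestamp plus 3-digit random suffix) can be parsed directly; a minimal sketch:

```python
from datetime import datetime

def parse_patch_serial(serial: str):
    """Split a 17-digit patch serial into its timestamp and random
    suffix, following the yyyyMMddHHmmss + 3-digit-random layout."""
    ts = datetime.strptime(serial[:14], "%Y%m%d%H%M%S")
    return ts, serial[14:]
```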
In the version management warehouse, an interface change record comprises field name, field type and default value attribute information. For example, a character-type user name field is defined as username in the original interface and changed to user_name after upgrading. When field semantic feature values are extracted through the feature vector calculator, each field name is split into independent morphemes by a word segmentation technique, and the similarity among morphemes is calculated to obtain a field similarity matrix; similarity values lie between 0 and 1, and the larger the value, the more similar the field semantics;
In setting the mapping matching threshold, a suitable threshold is selected based on the service scenario: the complete matching threshold for fields is set to 0.9 and the partial matching threshold to 0.7. When the field similarity is larger than the threshold, a mapping relation is established; for example, a field mapping rule is established for user_name and username, whose similarity is 0.95;
the data type checker verifies whether the new and old field types are compatible; if the original field type is varchar(50) and the new field type is varchar(100), the types are judged compatible and a mapping can be established;
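The varchar compatibility check can be sketched as a width comparison (an illustrative rule limited to varchar-to-varchar mappings; real type checkers would cover more type pairs):

```python
import re

def varchar_compatible(old_type: str, new_type: str) -> bool:
    """Treat a varchar-to-varchar mapping as compatible when the new
    column is at least as wide as the old one."""
    pattern = re.compile(r"varchar\((\d+)\)", re.IGNORECASE)
    old_m = pattern.fullmatch(old_type.strip())
    new_m = pattern.fullmatch(new_type.strip())
    if not (old_m and new_m):
        return False  # non-varchar pairs are out of scope for this sketch
    return int(new_m.group(1)) >= int(old_m.group(1))
```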
In the generation of the data migration rule, for the abandoned field phone_no, the fields mobile_phone and telephone are newly added, and the original phone number data is split and migrated into the corresponding new fields by the data conversion compensator according to the mobile/fixed-line rule: 11-digit numbers beginning with 1 are stored in the mobile_phone field, and fixed-line numbers are stored in the telephone field;
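The phone number migration rule above admits a direct sketch (the output record layout is an assumption):

```python
def migrate_phone(phone_no: str) -> dict:
    """Illustrative data conversion: 11-digit numbers starting with 1
    go to mobile_phone, everything else to telephone."""
    record = {"mobile_phone": None, "telephone": None}
    digits = "".join(ch for ch in phone_no if ch.isdigit())
    if len(digits) == 11 and digits.startswith("1"):
        record["mobile_phone"] = digits
    else:
        record["telephone"] = digits
    return record
```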
In the mapping rule verification stage, the data sample verifier extracts samples with different data distributions to verify the mapping rules, with the verification data accounting for not less than 30%. Mapping rules with a verification passing rate higher than 95% are marked as high reliability, those with a passing rate between 80% and 95% as medium reliability, and those with a passing rate lower than 80% as low reliability;
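The grading thresholds above map onto a small helper:

```python
def reliability_grade(pass_rate: float) -> str:
    """Map a verification pass rate onto the three grades in the
    text: >95% high, 80-95% medium, <80% low."""
    if pass_rate > 0.95:
        return "high"
    if pass_rate >= 0.80:
        return "medium"
    return "low"
```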
When a low-reliability mapping rule comprises two candidate target fields, for example user_addr mapping to either user_address or user_location, the mapping relation needs to be checked manually, and a data mapping change list is generated. The data mapping change list comprises the field mapping rule identifier, source field name, target field name, field type, default value, whether the field is discarded, the data conversion rule and the mapping reliability level attribute; the change list guides subsequent data migration work to ensure field mapping accuracy and data integrity during migration.
As shown in fig. 1-2, in step S102, according to the data mapping relation change list, a rule-based reasoning method is adopted to analyze the change content of the interface protocol and the data field, and according to the pre-configured interface protocol adaptation rule, a data mapping script between tools of different versions is generated;
reading an interface change rule set from the data mapping relation change list by using a rule analyzer, and analyzing a field mapping relation by the rule analyzer to obtain a rule grammar tree;
Generating an initial rule relation graph according to the rule grammar tree, and identifying the relation among the rule nodes by the initial rule relation graph through an integrity checker to obtain a node connection integrity verification result;
Selecting a mapping relation between the nodes of the complement rule of the matching rule from a rule template library aiming at the node connection integrity verification result to obtain a rule relation diagram after complement;
And converting the rule relation diagram after completion into a data processing instruction sequence through a mapping template generator, and verifying the data conversion correctness of the data processing instruction sequence through a data sample tester to obtain a data mapping script.
Specifically, in step S102, a mapping rule parser is used to read an interface change rule set from a data mapping relationship change list, a field mapping relationship and a data conversion rule are parsed by the rule parser, a rule syntax tree is generated for the change rule set, and an initial rule relation graph is generated according to the syntax tree node relation;
identifying the relation among the rule nodes by adopting an integrity checker aiming at the initial rule relation graph, judging the connection integrity of the rule nodes by adopting a node association verifier, selecting a matching rule from a rule template library by adopting a rule complement device, and complementing the mapping relation among the rule nodes according to the selected rule template;
Acquiring a complete mapping rule set from the completed rule relation diagram, converting the rule into a data processing instruction sequence through a mapping template generator, generating a data mapping code segment aiming at the instruction sequence, and generating a complete data mapping script according to a code assembler;
carrying out tool version compatibility verification on the data mapping script by adopting a version compatibility checker, verifying the correctness of data conversion by a data sample tester, generating a deployment configuration file by the script aiming at verification, and loading the data mapping script according to the configuration file;
Acquiring deployed mapping scripts from a data mapping script loader, establishing an inter-tool communication pipeline through a data receiving adapter, adjusting data processing speed by adopting a current limiting controller aiming at a data transmission process, and storing a data conversion process according to a data processing log recorder;
The data processing log is monitored in real time by adopting an anomaly monitor, the integrity of the converted data is verified by a data quality checker, the consistency of the source data and the target data is checked by adopting a data account checking device, and the correctness of the data conversion is judged according to the checking result;
When the mapping rule parser processes the data mapping relation change list, it parses the field mapping relations and data conversion rules and constructs a rule grammar tree in which each node contains a conversion rule and the corresponding data field relation. In a customer information change scenario, the original interface field customer_info contains three subfields name, age and address, and after upgrading it is split into two field groups basic_info and detail_info; the rule grammar tree records the field splitting relation;
In the process of checking the rule integrity, a node association verifier identifies the integrity of the relationship among rule nodes, takes order processing as an example, an order creation node associates an order payment node, a payment node associates an order delivery node to form a complete service link, and when the lack of a payment result processing rule between the order payment node and the order delivery node is found, a rule complement device selects a payment result processing template from a rule template library to supplement the node relationship;
In the stage of generating a data mapping script, the mapping template generator converts the complete rules into data processing instructions. For a commodity information synchronization scenario where the original interface commodity description field product_desc is limited to 500 characters and the upgraded field description is limited to 1000 characters, the instruction sequence comprises field length expansion and data truncation processing operations, and the code assembler assembles the multiple processing instructions into a complete mapping script;
The version compatibility checking process adopts a data sample tester to verify the accuracy of data conversion. In a sales data processing scenario, the precision of the original interface amount field amount is 2 decimal places and the precision of the updated field sale_amount is 4 decimal places; the accuracy of the precision-raising conversion of the amount data is verified through test data, and the script deployment configuration is generated after verification passes;
In the data transmission process, the current limiting controller adjusts the data processing speed according to the processing capacity of the tool. For example, the processing speed of a log acquisition tool is limited to 1000 records per second; when the transmission speed of an upstream data source reaches 2000 records per second, the current limiting controller starts a queue caching mechanism to temporarily store the data exceeding the processing capacity in a queue, so as to avoid data loss;
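The queue caching mechanism of the current limiting controller can be sketched as a buffer that downstream drains at most `rate` records per tick (the tick-based interface is an illustrative assumption):

```python
from collections import deque

class ThrottledPipe:
    """Sketch of the current limiting controller: downstream drains
    at most `rate` records per tick; overflow waits in an in-memory
    queue instead of being dropped."""
    def __init__(self, rate: int):
        self.rate = rate
        self.queue = deque()

    def push(self, items):
        """Upstream producer hands over a batch of records."""
        self.queue.extend(items)

    def tick(self):
        """One processing interval: drain up to `rate` queued records."""
        batch = []
        while self.queue and len(batch) < self.rate:
            batch.append(self.queue.popleft())
        return batch
```

With a 1000-per-second limit and a 2000-record burst, the first tick processes 1000 records and the remaining 1000 wait in the queue, matching the scenario in the text.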
In the monitoring of the data processing process, the anomaly monitor detects data processing anomalies in real time. In a stock synchronization scenario, the data quality checker verifies the correctness of the stock quantity: the original interface stock field stock_qty is an integer while the updated field inventory supports decimals; the data reconciliation device checks the consistency of the stock quantity before and after conversion, and when a stock difference is found, an anomaly log is recorded and a data correction mechanism is triggered.
As shown in fig. 1-2, in step S103, in the case that the tool version upgrade causes the interface protocol to be changed, the data mapping relationship is adjusted, in the adjustment process, similarity analysis is performed on the newly added field and the existing field, the relevance between the newly added field and the existing field is deduced, the newly added field is incorporated into the data field matcher, a complete data mapping relationship is constructed, and the abandoned interface is deleted;
Reading the new and old version interface definition files through a protocol change analyzer, extracting field names and types by adopting a field attribute analyzer, and generating field feature vectors according to text vector tools;
For the field feature vector, a vector similarity calculator is adopted to calculate cosine similarity between the newly added field and the existing field, and if the similarity is higher than a similarity threshold, a field mapping rule set is generated;
Acquiring a newly added field data sample through a data sampler according to the field mapping rule set, calculating a data distribution feature vector by adopting a feature extractor, and judging the accuracy of the field mapping rule;
aiming at the field mapping rule, identifying the dependency relationship of the abandoned interface by calling the link analyzer, generating a cleaning sequence table by adopting a topology sequencing algorithm, and confirming the disabling state of the cleaning interface according to the calling verifier.
Specifically, in step S103, interface protocol change notification information is obtained from the upgrade tool, a new and old version interface definition file is read by using a protocol change analyzer, a field name, a type, a length and a default value are extracted by a field attribute analyzer, a text vectorization tool is used for calculating semantic features of the field name, and a field feature vector is generated according to the field attribute features and the semantic features;
Acquiring all newly added field data from the field feature vectors, calculating cosine similarity between the newly added field and the existing field feature vectors through a vector similarity calculator, generating a mapping association relation aiming at field pairs with similarity higher than a similarity threshold value, and judging field type compatibility according to a field mapping rule verifier to obtain an initial mapping scheme;
Acquiring a mapping rule set from an initial mapping scheme, acquiring a newly added field data sample set through a data sampler, calculating a data distribution feature vector by adopting a feature extractor, calculating data feature similarity according to a deep neural network, and judging the accuracy of the field mapping rule;
Aiming at the field pairs with the accuracy lower than the threshold value of the mapping rule, updating a field matcher based on the newly added data samples by adopting an increment learning algorithm, evaluating the matching accuracy through a cross verifier, judging whether to continue optimizing according to the accuracy threshold value, and obtaining an optimized mapping scheme;
Acquiring an abandoned interface identifier from the interface call records, identifying the dependency relationships of the abandoned interface through the call link analyzer, generating a cleaning sequence table for the dependency links by adopting a topological sorting algorithm, and generating interface cleaning instructions according to the sequence table;
Aiming at the interface cleaning instruction, adopting a configuration updater to delete the configuration information of the abandoned interface, confirming that the cleaning interface is completely deactivated by calling a verifier, judging the effectiveness of the cleaning result according to the verification result, and generating an interface cleaning report;
When the protocol change analyzer processes the new and old version interface definition files, the field attribute analyzer extracts the feature information of each field. For example, for the commodity name field in the commodity information synchronization interface, the original interface field name is product_name with type varchar, length 50 characters and an empty default value; after upgrading, the field name is changed to item_name with type varchar, length 100 characters and an empty default value. The text vectorization tool analyzes the field name semantics and calculates the feature vector values;
In the vector similarity calculation process, cosine similarity is adopted to calculate the degree of similarity among field feature vectors. For the order processing interface, the order state field order_status of the original interface takes values including created, paid and shipped; after upgrading, the state field is split into the payment state payment_status and the logistics state shipping_status, and the mapping relation among the fields is judged through feature vector calculation, where a similarity higher than 0.8 is considered related;
In the data feature extraction link, the data sampler acquires newly added field sample data. Taking a user information synchronization scenario as an example, the original interface user address field user_address is split after upgrading into three fields: province, city and details_address. The feature extractor analyzes the distribution rules of the original address data and the split address data, and the deep neural network judges the field mapping accuracy based on the data feature similarity;
In the incremental learning process, for the mobile phone number field, the original interface field is mobile_phone and is changed into phone_number after upgrading, with an initial mapping accuracy of 0.75. The incremental learning algorithm is trained on 1000 newly added data samples, and cross validation shows the accuracy improved to 0.92, exceeding the 0.9 threshold, so optimization is completed;
In the abandoned interface processing stage, the call link analyzer identifies the interface dependency relationships. Taking a payment scenario as an example, the payment result query interface depends on the payment state query interface, and the state query interface depends on the payment creation interface; the cleaning sequence generated by topological sorting is: clean the result query interface first, then the state query interface, and finally the creation interface;
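The cleaning sequence above is a topological order over the dependency links, cleaning callers before their callees; a sketch using Kahn's algorithm (the dict-of-dependencies input format is an assumption):

```python
from collections import deque

def cleanup_order(depends_on):
    """Kahn's algorithm over the dependency links: an interface is
    cleaned before the interfaces it depends on, so callers
    disappear before their callees."""
    indegree = {n: 0 for n in depends_on}
    for deps in depends_on.values():
        for d in deps:
            indegree[d] += 1
    ready = deque(n for n, c in indegree.items() if c == 0)
    order = []
    while ready:
        n = ready.popleft()
        order.append(n)
        for d in depends_on[n]:
            indegree[d] -= 1
            if indegree[d] == 0:
                ready.append(d)
    if len(order) != len(depends_on):
        raise ValueError("dependency cycle detected")
    return order
```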
In the interface cleaning verification, after the configuration updater deletes the interface configuration, the verifier is called to monitor for 30 minutes, the calling amount of the abandoned interface is confirmed to be 0, no abnormal error is reported, the interface cleaning is successful, and a cleaning report record processing process is generated;
after the mapping scheme is implemented, a mapping relation is established for 95% of fields in the original interface, 5% of fields cannot be mapped due to service change, and the fields are deleted from the list after manual confirmation, so that interface protocol upgrading is completed.
As shown in fig. 1-2, in step S104, after deleting the obsolete interface, a shortest path search algorithm based on a graph is adopted to bypass the deleted obsolete interface, re-plan the data transmission path between tools, and re-form a complete tool chain data transmission path;
The method comprises the steps of sending a probe request to the nodes in the tool chain through a node detector, and acquiring the tool chain topological structure by comparing each node's response time with a response time threshold;
According to the tool chain topological structure, a connectivity detector is adopted to calculate the data flow paths among nodes, and a weighted shortest path is obtained from the transmission rate and processing rate of each data flow path;
For the weighted shortest path, a path scoring device calculates a transmission delay score, a packet loss rate score and a load balancing degree score, and the optimal transmission path is determined according to the weight of each dimension score;
and a configuration issuing device is adopted to update the node routing configuration tables according to the optimal transmission path, transmission performance indicators are acquired through a performance collector, and the system switches to an alternative path if abnormal performance indicators are detected.
Specifically, in step S104, a tool chain scanner is used to obtain the tool chain topological structure: a probe request is sent to each node through the node detector, nodes whose response time exceeds a preset threshold are marked as abnormal, and a tool chain topology graph is generated from the calling relationships between nodes;
Node connection relations are acquired from the tool chain topology graph, data flow paths among nodes are calculated through the connectivity detector, a load calculator is adopted to count the data packet transmission rate and node processing rate of each data transmission path, and the weighted shortest path is calculated according to Dijkstra's algorithm;
Candidate paths are obtained from the weighted shortest path set and scored on multiple dimensions by the path scoring device: scores are calculated for the three dimensions of transmission delay, packet loss rate and load balancing degree, and a comprehensive path score is computed according to the weight of each dimension;
The optimal transmission path is selected from the comprehensive path score list, probe data packets are sent through a data packet detector to verify path connectivity, a data sampler is adopted to acquire transmission quality indicators for the connected path, and a path availability report is generated from the transmission quality verification results;
For transmission paths that pass verification, a path configurator generates route configuration information, a configuration issuing device updates the node routing configuration tables, a configuration verifier checks the consistency of the issued configuration, and path states are updated according to the verification results;
Transmission path information is acquired from the updated route configuration, data transmission performance indicators are monitored through the performance collector, a path switcher switches to an alternative path when abnormal performance indicators appear, and the routing configuration table is updated according to the switching result;
In tool chain topology scanning, the node detector probes node states by sending heartbeat packets; the response time of a normal node is within 200 milliseconds, and a node is marked as abnormal if 3 consecutive probe responses exceed 500 milliseconds. In an order processing tool chain, data is transmitted among the order creation node, the payment node and the inventory node through a message queue;
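A minimal sketch of that abnormal-state rule follows; the class name and sliding-window handling are assumptions, while the 200 ms healthy bound and the 3-consecutive-probes-over-500 ms trigger come from the text:

```python
from collections import deque


class NodeProbe:
    """Flag a node abnormal once 3 consecutive heartbeat responses
    exceed 500 ms; anything else counts as normal."""

    SLOW_MS = 500   # per-probe slowness bound from the text
    WINDOW = 3      # consecutive slow probes required

    def __init__(self):
        # bounded deque keeps only the last WINDOW response times
        self.recent = deque(maxlen=self.WINDOW)

    def record(self, response_ms):
        """Record one heartbeat response time and return the new state."""
        self.recent.append(response_ms)
        return self.state()

    def state(self):
        if len(self.recent) == self.WINDOW and all(
            r > self.SLOW_MS for r in self.recent
        ):
            return "abnormal"
        return "normal"
```

One fast response resets the streak, since the bounded window must be entirely slow before the node is flagged.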
When calculating data flow paths, the connectivity detector identifies the data transmission channels between nodes, and the load calculator acquires the performance indicators of each node: the order creation node processes 1000 orders per second, the message queue transmits 2000 messages per second, and the payment node processes 800 transactions per second; Dijkstra's algorithm takes node processing delay and transmission delay as path weights to calculate the shortest path;
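The weighted shortest-path step can be sketched with a plain Dijkstra implementation; the graph below and its millisecond edge weights are hypothetical (including the backup inventory node), each weight standing in for per-hop transmission delay plus the downstream node's processing delay as the text describes:

```python
import heapq


def dijkstra(graph, source):
    """Plain Dijkstra over non-negative edge weights (milliseconds).

    Returns (dist, prev): shortest known delay to each node and the
    predecessor map for reconstructing the chosen path.
    """
    dist = {source: 0.0}
    prev = {}
    heap = [(0.0, source)]
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist.get(node, float("inf")):
            continue  # stale heap entry
        for nbr, w in graph.get(node, {}).items():
            nd = d + w
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                prev[nbr] = node
                heapq.heappush(heap, (nd, nbr))
    return dist, prev


# Hypothetical weights: e.g. at 800 transactions/s the payment node adds
# roughly 1/800 s ≈ 1.25 ms of processing delay per transaction.
graph = {
    "order_create": {"payment": 1.75, "inventory_backup": 2.5},
    "payment": {"inventory": 1.9},
    "inventory_backup": {"inventory": 0.8},
}
dist, prev = dijkstra(graph, "order_create")
```

With these weights, the route through the backup node (2.5 + 0.8 = 3.3 ms) undercuts the route through the payment node (1.75 + 1.9 = 3.65 ms), illustrating how the algorithm bypasses a slower hop.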
In the path scoring process, candidate paths are scored on multiple dimensions. In the transmission delay dimension, an end-to-end delay below 500 milliseconds scores 90 points, 500 to 800 milliseconds scores 70 points, and above 800 milliseconds scores 50 points;
In the packet loss rate dimension, a packet loss rate below 0.1% scores 90 points, 0.1% to 0.5% scores 70 points, and above 0.5% scores 50 points;
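The per-dimension score bands above map directly to code. The composite weighting (0.5/0.3/0.2) and the load-balance score input are assumptions, since the text fixes the bands for delay and loss but not the dimension weights or the load-balance bands:

```python
def delay_score(ms):
    """Transmission delay bands from the text: <500 ms, 500-800 ms, >800 ms."""
    if ms < 500:
        return 90
    if ms <= 800:
        return 70
    return 50


def loss_score(pct):
    """Packet loss bands from the text: <0.1%, 0.1-0.5%, >0.5%."""
    if pct < 0.1:
        return 90
    if pct <= 0.5:
        return 70
    return 50


def composite(delay_ms, loss_pct, balance_score, weights=(0.5, 0.3, 0.2)):
    """Weighted sum across the three dimensions (weights are assumed)."""
    wd, wl, wb = weights
    return wd * delay_score(delay_ms) + wl * loss_score(loss_pct) + wb * balance_score
```

The path with the highest composite score would then be selected as the optimal transmission path.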
In the path verification stage, the data packet detector sends probe packets every 60 seconds to check path connectivity, and the data sampler performs sampling analysis on the transmitted data at a ratio of one in one thousand, recording data transmission quality indicators including average transmission delay, packet loss rate and data error rate; when the transmission delay is below 800 milliseconds and the packet loss rate is below 0.5%, the path is judged available. In route configuration updating, the configuration issuing device issues the routing table, which includes the target node identifier, the next-hop node and transmission channel information, to the nodes; the configuration verifier verifies the issuing result by comparing each node's local routing table with the issued configuration, and route updating is complete when configuration consistency reaches 100%;
In the transmission performance monitoring stage, the performance collector gathers performance indicators every 30 seconds; when node processing delay exceeds a preset threshold or the number of backlogged messages in a queue exceeds 80% of the queue capacity, the path switcher activates the alternative path and switches data traffic to the alternative channel, preserving message ordering during the switch to avoid message duplication or loss; after switching is complete, the routing configuration is updated and the switching event is recorded;
In a commodity inventory synchronization scenario, the tool chain is formed by the commodity service node, the inventory service node and the order service node, and inventory change messages are transmitted through the message queue; when the inventory service node hits a performance bottleneck and message processing delay exceeds 1 second, the path switcher switches the message flow to the standby inventory node, ensuring real-time synchronization of inventory data.
As shown in fig. 1-2, in step S105, the data processing delay and resource consumption of the tool nodes in the re-formed tool chain data transmission path are collected, the average processing delay and throughput before and after the interface protocol change are analyzed, and whether network delay and bandwidth limitations exist is judged; if they exist, the data fragment size and transmission batch parameters used in data processing are acquired;
A performance collector is adopted to acquire processing delay data and resource consumption data from the tool chain nodes, wherein the processing delay data is generated during tool chain node processing;
performance index data with a uniform scale is obtained from the processing delay data and resource consumption data through a data normalization processor, wherein the performance index data comprises a processing delay index and a resource consumption index;
probe packets are transmitted through a network detector to obtain network transmission delay data, wherein the network transmission delay data is calculated with reference to the performance index data;
and a data fragment calculator is adopted to generate optimal fragment scale parameters for the network transmission delay data, wherein the optimal fragment scale parameters are calculated from the network bandwidth and the node processing capacity.
Specifically, in step S105, the performance collector acquires processing delay data and resource consumption data from the tool chain nodes, a data statistics unit calculates the mean and standard deviation of each node's processing delay, the central processing unit occupancy, memory occupancy and disk read-write rate are sampled periodically, and resource consumption trend data is generated by a time series processor;
Performance indicators before and after the interface protocol change are acquired from the resource consumption trend data, indicators of different dimensions are converted to a uniform scale through the data normalization processor, a Bayesian inference algorithm is adopted to identify performance outliers in processing delay and throughput, and a tool chain performance prediction function is constructed by a performance predictor;
a performance prediction result is acquired from the tool chain performance prediction function, probe packets are sent through the network detector to calculate network transmission delay, a traffic monitor records real-time traffic data according to network bandwidth usage, and a network transmission quality score is generated by a network quality scorer;
A network state evaluation result is acquired from the network transmission quality score, transmission bottleneck nodes are identified through a network bottleneck locator, a resource load calculator is adopted to determine the processing capacity threshold of each bottleneck node, and a network resource allocation scheme is generated by a load balancer;
For the network resource allocation scheme, the data fragment calculator computes the optimal fragment size from the network bandwidth and node processing capacity, data batch transmission parameters are set through a transmission batch generator, and the fragment transmission effect is verified with a data transmission simulator;
Optimized parameters are obtained from the fragment transmission verification result, the rationality of each parameter's value range is checked through a parameter constraint checker, a parameter applicator updates the data transmission configuration with the parameters that pass verification, and the parameter adjustment effect is recorded by a transmission performance monitor;
Taking a data synchronization scenario as an example, the performance collector gathers operating data at a tool chain node, sampling node processing delay 100 times per minute; the sampled processing delay has a mean of 200 milliseconds and a standard deviation of 50 milliseconds, and the recorded node resource consumption shows an average central processing unit occupancy of 75%, memory occupancy of 65%, and a disk read-write rate of 120 MB/second;
In the data normalization process, the various performance indicators are uniformly converted to the 0-1 interval; the performance comparison before and after the interface protocol change shows processing delay increasing from 200 milliseconds to 350 milliseconds and data throughput dropping from 1000 to 600 transactions per second, the Bayesian inference algorithm flags processing delays exceeding 300 milliseconds as outliers, and the constructed performance prediction function shows processing capacity on a declining trend;
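The 0-1 conversion is ordinary min-max normalization; a sketch follows (the sample delay values are illustrative, not the embodiment's full data set):

```python
def minmax(values):
    """Scale a list of raw metric samples into the 0-1 interval."""
    lo, hi = min(values), max(values)
    if hi == lo:
        # constant series: no spread to scale, map everything to 0
        return [0.0 for _ in values]
    return [(v - lo) / (hi - lo) for v in values]


# e.g. processing-delay samples (ms) spanning the before/after change
scaled = minmax([200, 350, 300, 250])
```

After this conversion, delay, throughput and resource-occupancy series share a common scale, so they can be compared or combined into a single performance index.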
In network state monitoring, the network detector transmits probe packets every 30 seconds to calculate network delay; the results show an average transmission delay of 150 milliseconds with network jitter within 50 milliseconds, the bandwidth utilization recorded by the traffic monitor reaches 85%, and on the 100-point network quality scale the current network transmission quality score is 75 points;
The network bottleneck identification process shows that insufficient processing capacity at the data receiving node causes data backlog: the node's processing capacity threshold is 800 messages per second while the upstream data source generates 1200 messages per second, so the load balancer distributes data traffic between the primary node and the standby node in a 3:7 ratio according to the processing capacity difference;
The data fragment calculation shows that, for a bandwidth of 100MB and a node capable of processing 800 messages per second, the optimal fragment size is set to 512KB and the number of messages per transmission batch is kept within 100; the data transmission simulation test shows that after fragment transmission is adopted, node processing delay falls to 180 milliseconds and the data backlog is relieved;
In the parameter optimization stage, the parameter constraint checker verifies that the fragment size ranges between 64KB and 1024KB and the batch size between 50 and 200; transmission performance monitoring after the parameters are applied shows node processing delay stabilized within 200 milliseconds, central processing unit occupancy reduced to 65%, memory occupancy reduced to 55%, and network bandwidth utilization reduced to 70%;
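The constraint check can be sketched as a range clamp; clamping out-of-range values (rather than rejecting them outright) is an assumed design choice, while the 64KB-1024KB fragment range and 50-200 batch range come from the text:

```python
# Allowed ranges stated in the text
FRAGMENT_KB_RANGE = (64, 1024)
BATCH_RANGE = (50, 200)


def check_params(fragment_kb, batch_size):
    """Validate proposed transfer parameters against the allowed ranges.

    Out-of-range values are clamped to the nearest bound so the caller
    always receives usable parameters.
    """
    lo_f, hi_f = FRAGMENT_KB_RANGE
    lo_b, hi_b = BATCH_RANGE
    return (
        min(max(fragment_kb, lo_f), hi_f),
        min(max(batch_size, lo_b), hi_b),
    )
```

For instance, the embodiment's 512KB fragment with a 100-message batch passes unchanged, while a 2048KB proposal would be pulled back to the 1024KB ceiling.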
In the order processing tool chain, data is transmitted among the order query service, the order statistics service and the order export service through a message queue; insufficient processing capacity of the order export service during peak periods caused message queue backlog, and after the optimization the order processing link returned to normal operation, with order export delay shortened from 10 minutes to within 3 minutes.
As shown in fig. 1-2, in step S106, the size of the data fragment and the transmission batch parameter are adjusted, the circulation efficiency of the data in the tool chain is optimized, and the execution sequence and concurrency of each tool in the tool chain are dynamically adjusted according to the optimized data circulation scheme;
node resource monitoring data is acquired, and a resource load calculator counts the node processor occupancy and memory occupancy from the monitoring data to obtain a resource load value;
a data processor calculates the node processing rate and transmission rate from the resource load value, and a data sampler acquires tool chain transmission samples corresponding to the transmission rate;
a test data set is generated from the transmission samples, and the data backlog quantity is recorded against the test data set to obtain backlog state data;
tool node dependency relations are checked against the backlog state data through a dependency verifier, priority values are calculated from the dependency relations, and a tool execution order is generated according to the priority values;
and a scheduling sequence table is acquired from the tool execution order, a resource allocator divides the computing resource quota, and the tool concurrency is dynamically adjusted according to a resource monitor.
Specifically, in step S106, resource monitoring data is obtained from the tool nodes, the resource load calculator counts each node's central processing unit occupancy and memory occupancy, the data processor calculates the node data processing rate and transmission rate, a genetic algorithm computes initial fragment and batch parameters from the rate data, and a parameter value interval is generated according to the resource capacity threshold;
the data fragment size and transmission batch values are taken from the parameter value interval, tool chain transmission samples are acquired through the data sampler, a performance calculator counts the transmission delay and throughput of the samples, and the data fragment size and transmission batch values are adjusted against the preset performance targets;
a transmission verifier performs parameter verification on the adjusted parameters: a data generator constructs a test data set, a queue monitor records the data backlog quantity during test data transmission, and the rationality of the parameters is judged from the backlog situation;
Node dependency data is acquired from the tool dependency graph, the correctness of the dependencies is checked through the dependency verifier, a task priority generator calculates tool priority values for the dependencies that pass verification, and the tool execution order is generated by a topological sorter;
The scheduling sequence table is acquired from the tool execution order, a concurrency controller sets tool concurrency thresholds, the resource allocator divides the computing resource quota for tool operation, and the tool concurrency is dynamically adjusted according to the resource monitor;
For the running state of each tool, an execution monitor tracks task execution, the performance collector records tool processing capacity indicators, a load predictor calculates the resource usage trend, and the tool resource configuration is updated according to the prediction results;
In tool node resource monitoring, taking the order processing tool chain as an example, the order creation node's central processing unit occupancy is 75%, memory occupancy 65%, data processing rate 1000 orders per second and transmission rate 1500 messages per second; the genetic algorithm iteratively computes an initial data fragment size of 256KB with 50 messages per transmission batch, and the resource capacity threshold caps the fragment size at 1024KB;
In the performance calculation stage, the data sampler collects transmission samples every 10 seconds; the sampling results show an average transmission delay of 180 milliseconds and a throughput of 800 orders per second, against preset performance targets of transmission delay below 150 milliseconds and throughput not lower than 1000 orders per second, so according to the gap the fragment size is adjusted to 128KB and the batch size is increased to 80 orders;
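The direction of these adjustments can be sketched as a simple feedback rule; the target values match the text, but the step sizes (halving the fragment, growing the batch by 1.6x) are assumptions chosen only to reproduce the 256KB-to-128KB and 50-to-80 moves of this example:

```python
# Preset performance targets stated in the text
TARGET_DELAY_MS = 150
TARGET_THROUGHPUT = 1000  # orders per second


def adjust(fragment_kb, batch, measured_delay_ms, measured_throughput):
    """Toy feedback rule: shrink fragments when delay overshoots the
    target, grow the batch when throughput undershoots it.

    The halving and 1.6x growth factors are illustrative assumptions;
    bounds (64KB floor, 200 batch ceiling) follow the constraint ranges.
    """
    if measured_delay_ms > TARGET_DELAY_MS:
        fragment_kb = max(64, fragment_kb // 2)
    if measured_throughput < TARGET_THROUGHPUT:
        batch = min(200, int(batch * 1.6))
    return fragment_kb, batch
```

Applied to the sampled figures (180 ms, 800 orders/s), the rule moves the parameters in the same direction as the embodiment.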
In the parameter verification stage, the transmission verifier generates a test data set containing 10000 orders and runs a transmission test under a high-concurrency scenario; the queue monitor shows a maximum message queue backlog of 500, below the backlog threshold of 1000, verifying that the parameter settings are reasonable;
In the tool dependency analysis, the order processing tool chain comprises four nodes: order creation, inventory checking, payment processing and logistics delivery; the dependency verifier confirms that the order creation node depends on the inventory checking node, the payment processing node depends on the order creation node, and the logistics delivery node depends on the payment processing node; the task priority generator calculates priority from the dependency depth, and topological ordering generates the execution order;
In the resource allocation process, the concurrency controller sets concurrency thresholds according to node processing capacity: the order creation node's concurrency is set to 4, the inventory checking node's to 6, the payment processing node's to 8 and the logistics delivery node's to 4, and the resource allocator allocates computing resources in the ratio 4:6:8:4;
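Dividing a compute budget in the 4:6:8:4 ratio can be sketched with largest-remainder rounding, which keeps the allocated units summing exactly to the budget (the total-units figures below are illustrative):

```python
def split_quota(total_units, weights):
    """Divide total_units among nodes in proportion to weights.

    Integer shares are produced with largest-remainder rounding so the
    parts always sum exactly to total_units.
    """
    s = sum(weights)
    raw = [total_units * w / s for w in weights]
    base = [int(x) for x in raw]          # floor of each ideal share
    rem = total_units - sum(base)         # leftover units to distribute
    # hand leftover units to the largest fractional parts first
    order = sorted(range(len(raw)), key=lambda i: raw[i] - base[i], reverse=True)
    for i in order[:rem]:
        base[i] += 1
    return base
```

With a budget equal to the ratio sum the split is exact; for other budgets the rounding distributes the remainder without over- or under-allocating.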
In the execution monitoring stage, the performance collector records each node's processing indicators: the order creation node averages 150 milliseconds per order with a 99.5% processing success rate, and the inventory checking node averages 100 milliseconds with a 99.8% success rate; based on historical data, the load predictor forecasts peak-period processing demand rising by 30%, so the node resource quotas are increased in advance.
While the application has been described in detail with reference to the foregoing embodiments, it will be understood by those skilled in the art that the foregoing embodiments may be modified or equivalents may be substituted for some of the features thereof, and that the modifications or substitutions do not depart from the spirit and scope of the embodiments of the application.
It will be apparent to those skilled in the art from this disclosure that various other changes and modifications can be made which are within the scope of the application as defined in the appended claims.