
CN119621290A - Cloud computing-based enterprise digital tool chain dynamic collaboration and deployment method - Google Patents


Info

Publication number
CN119621290A
CN119621290A (application CN202510157478.3A)
Authority
CN
China
Prior art keywords
data
rule
field
mapping
tool chain
Prior art date
Legal status
Pending
Application number
CN202510157478.3A
Other languages
Chinese (zh)
Inventor
朱益宏
吴少华
黄斌全
夏冰
宛小伟
梁卓锐
杨洲舟
王世英
刘红
尹少群
张鹏涛
Current Assignee
Guangdong Zhongda Management Consulting Group Co ltd
Original Assignee
Guangdong Zhongda Management Consulting Group Co ltd
Priority date
Filing date
Publication date
Application filed by Guangdong Zhongda Management Consulting Group Co ltd filed Critical Guangdong Zhongda Management Consulting Group Co ltd
Priority to CN202510157478.3A
Publication of CN119621290A
Legal status: Pending

Landscapes

  • Data Exchanges In Wide-Area Networks (AREA)

Abstract


The present application relates to a cloud computing-based method for the dynamic collaboration and deployment of an enterprise digital tool chain. Based on a data mapping relationship change list, a rule-based reasoning method analyzes changes to interface protocols and data fields, and data mapping scripts between different tool versions are generated according to pre-configured interface protocol adaptation rules. After deprecated interfaces are deleted, a graph-based shortest-path search algorithm bypasses them, re-plans the data transfer paths between tools, and re-forms a complete tool chain data transfer path. The data fragment size and transmission batch parameters are adjusted to optimize data flow efficiency through the tool chain, and the execution order and concurrency of each tool in the tool chain are dynamically adjusted according to the optimized data flow plan.

Description

Cloud computing-based enterprise digital tool chain dynamic collaboration and deployment method
Technical Field
The application relates to the technical field of electric digital data processing, and in particular to a cloud computing-based method for the dynamic collaboration and deployment of an enterprise digital tool chain.
Background
In enterprise tool chain dynamic collaboration scenarios, when an upgrade to a core tool in the tool chain changes its upstream and downstream interface protocols, efficiently adjusting the data mapping relationships between tools is a technical problem to be solved.
Newly added data fields and deprecated interfaces pose great challenges to the automatic adjustment mechanism of a tool chain collaboration system. The system must accurately identify the scope of impact of an interface change and dynamically generate new mapping rules according to business logic and data dependency relationships. This process involves complex data structure comparison, semantic understanding, and logical reasoning, placing high demands on the intelligence of the algorithm; at the same time, frequent interface changes lead to continual updating of the mapping rules, affecting the execution efficiency of the whole tool chain.
How to minimize the impact of mapping adjustment on tool chain performance, while guaranteeing data consistency and correctness, is a technical problem requiring careful study and trade-offs.
This requires the collaboration system to fully consider the priority and call frequency of data dependencies during mapping adjustment, to set adjustment strategies and time windows reasonably, and to optimize the mapping rules on the critical path in a targeted way, so that an optimal balance is found between dynamic adaptation and efficient execution and the stable operation of the tool chain is ensured.
Disclosure of Invention
In order to solve the problems in the prior art, the application aims to provide a cloud computing-based dynamic collaboration and deployment method for an enterprise digital tool chain.
The application discloses a cloud computing-based dynamic collaboration and deployment method for an enterprise digital tool chain, which comprises the following steps:
S101, monitoring version changes of the core tools in the tool chain, obtaining the newly added data fields and deprecated interface information after an upgrade, judging the differences between the pre- and post-upgrade interface protocols through semantic analysis, adjusting the data mapping relationships according to those differences to obtain the adjustment content, and generating a data mapping relationship change list;
S102, according to the data mapping relationship change list of step S101, analyzing the changes to interface protocols and data fields with a rule-based reasoning method, and generating data mapping scripts between different tool versions according to pre-configured interface protocol adaptation rules;
S103, when a tool version upgrade changes the interface protocol of step S102, adjusting the data mapping relationships: during adjustment, performing similarity analysis between the newly added fields and the existing fields, deducing their relevance, incorporating the new fields into a data field matcher, constructing the data mapping relationships, and deleting the deprecated interfaces;
S104, after the deprecated interfaces of step S103 are deleted, using a graph-based shortest-path search algorithm to bypass them, re-planning the data transmission paths between tools, and re-forming the tool chain data transmission path;
S105, collecting the data processing delay and resource consumption of the tool nodes on the re-formed path of step S104, analyzing the average processing delay and throughput before and after the interface protocol change, determining whether network delay or bandwidth limitations exist, and if so, obtaining the data fragment size and transmission batch parameters of the data processing process;
S106, optimizing the flow efficiency of data through the tool chain according to the data fragment size and transmission batch parameters of step S105, and dynamically adjusting the execution order and concurrency of each tool in the tool chain according to the optimized data flow plan.
Preferably, in step S101, a field change set is extracted from the version management repository according to the patch package serial number and a field change index table is established. The names and types of newly added fields in the index table are identified through regular-expression matching, semantic feature values are obtained with a feature vector calculator, and field pairs whose similarity exceeds a threshold are selected. A data type verifier judges the type compatibility of each field pair, a data conversion compensator generates data migration rules based on that compatibility, and a rule verifier performs data consistency verification, so that the reliability level of each mapping rule is marked and a mapping relationship list with reliability level marks is generated;
mapping rules whose reliability level is below the threshold are then rechecked, their reliability levels are updated according to the recheck results, and a data mapping change list is generated.
Preferably, in the step S102, the rule parser is used to read the interface change rule set from the data mapping relation change list, and the field mapping relation is parsed by the rule interpreter to construct a rule grammar tree;
Generating an initial rule relation graph based on the rule grammar tree, and verifying the integrity of node connection through an integrity checker;
and selecting a matching rule from a rule template library to complement the mapping relation among the rule nodes according to the verification result of the integrity checker, obtaining a complemented rule relation diagram, converting the complemented rule relation diagram into a data processing instruction sequence by using a mapping template generator, and verifying the correctness of data conversion by using a data sample tester to obtain a data mapping script.
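The rule-to-script conversion of step S102 can be illustrated with a minimal sketch. The function name, rule fields, and `op` vocabulary below are hypothetical, not part of the disclosure; the sketch merely shows how a field-mapping rule might be flattened into the kind of ordered data-processing instruction sequence that the mapping template generator is said to emit.

```python
# Hypothetical sketch of S102: mapping rules -> instruction sequence.
# Rule keys ("source", "target", "source_type", "target_type") are assumptions.

def build_instruction_sequence(rules):
    """Convert field-mapping rules into ordered processing instructions."""
    instructions = []
    for rule in rules:
        # Emit a rename instruction when source and target names differ.
        if rule["source"] != rule["target"]:
            instructions.append({"op": "rename",
                                 "from": rule["source"], "to": rule["target"]})
        # Emit a cast instruction when the declared types differ.
        if rule.get("source_type") != rule.get("target_type"):
            instructions.append({"op": "cast",
                                 "field": rule["target"],
                                 "to_type": rule["target_type"]})
    return instructions

# Example rule drawn from the username -> user_name change discussed later.
rules = [{"source": "username", "target": "user_name",
          "source_type": "varchar(50)", "target_type": "varchar(100)"}]
seq = build_instruction_sequence(rules)
```

A code assembler could then join such instruction sequences into a complete mapping script, as the embodiment describes.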
Preferably, in the step S103, the new and old version interface definition files are read by the protocol change parser, the field names and types are extracted, the field feature vectors are generated by using the text vectorization tool, and the cosine similarity between the newly added field and the existing field is calculated;
If the similarity exceeds the threshold value, generating a field mapping rule set;
if the similarity does not exceed the threshold value, a field mapping rule set is not generated;
A data sampler acquires samples of the newly added field data, and data distribution feature vectors are calculated to judge the accuracy of the field mapping rules; a call-link analyzer identifies the dependency relationships of the deprecated interfaces, a topological sorting algorithm generates a cleanup sequence table, and a call verifier confirms the disabled state of the cleaned interfaces.
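The cleanup sequence table mentioned above is a standard topological ordering of the deprecated interfaces' call graph: an interface may only be removed after everything that still calls it. A minimal Kahn's-algorithm sketch (the interface names and `cleanup_order` function are illustrative assumptions, not from the disclosure):

```python
from collections import deque

def cleanup_order(calls):
    """Kahn's topological sort over a call graph.
    calls[a] = interfaces that a invokes; a must be cleaned before them."""
    indeg = {n: 0 for n in calls}
    for n, outs in calls.items():
        for m in outs:
            indeg.setdefault(m, 0)
            indeg[m] += 1
    for n in indeg:
        calls.setdefault(n, [])         # ensure every node has an edge list
    queue = deque(n for n, d in indeg.items() if d == 0)
    order = []
    while queue:
        n = queue.popleft()
        order.append(n)
        for m in calls[n]:
            indeg[m] -= 1
            if indeg[m] == 0:
                queue.append(m)
    return order                         # callers appear before callees

# Hypothetical example: report_api still calls the deprecated legacy_export,
# so report_api must be cleaned first.
order = cleanup_order({"report_api": ["legacy_export"], "legacy_export": []})
```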
Preferably, in step S104, a node detector sends probe requests to the nodes in the tool chain, and the topology of the tool chain is obtained by comparing node response times against a preset response-time threshold;
a connectivity detector calculates the data flow paths between nodes, and the weighted shortest path is determined from the transmission and processing rates of each path;
a path scorer computes scores for the transmission delay, packet loss rate, and load balance of the weighted shortest paths, and selects the optimal transmission path according to the weights of these scores;
a configuration issuer updates the routing configuration tables of the nodes according to the optimal transmission path, while a performance collector gathers transmission performance indicators; when abnormal performance indicators are detected, the node is switched to an alternative path.
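The graph-based re-planning of step S104 can be sketched with Dijkstra's algorithm, treating deleted deprecated interfaces as removed edges. The graph shape, weights, and node names below are illustrative assumptions; the patent does not specify its exact weighting scheme.

```python
import heapq

def shortest_path(graph, src, dst, removed=frozenset()):
    """Dijkstra over tool chain edges; `removed` holds deleted interface
    edges to bypass. graph[u] = {v: weight}, where weight stands in for
    combined transfer and processing cost."""
    dist, prev = {src: 0}, {}
    heap = [(0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue                      # stale heap entry
        for v, w in graph.get(u, {}).items():
            if (u, v) in removed:
                continue                  # bypass the deleted interface
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    if dst not in dist:
        return None                       # no complete path remains
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return path[::-1]

# Hypothetical three-tool chain: once the direct A->C interface is deleted,
# the planner re-forms the path through B.
graph = {"A": {"B": 2, "C": 1}, "B": {"C": 1}, "C": {}}
rerouted = shortest_path(graph, "A", "C", removed={("A", "C")})
```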
Preferably, in the step S105, processing delay data and resource consumption data are obtained from a tool chain node by using a performance collector, and a data normalization processor is utilized to obtain performance index data with uniform scale according to the processing delay data and the resource consumption data, where the performance index data includes a processing delay index and a resource consumption index;
transmitting a detection packet through a network detector to acquire network transmission delay data;
And aiming at the network transmission delay data, obtaining the optimal fragment scale parameter by using a data fragment calculator.
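The patent does not disclose the data fragment calculator's formula, so the sketch below is only one plausible heuristic under stated assumptions: size each fragment so that one fragment transfers in roughly a target time after subtracting round-trip overhead, clamped to sane bounds. All parameter names and constants are illustrative.

```python
def optimal_fragment_size(bandwidth_bps, rtt_s, target_time_s=1.0,
                          min_size=64 * 1024, max_size=16 * 1024 * 1024):
    """Heuristic fragment sizing (an assumption, not the disclosed method):
    budget `target_time_s` per fragment, reserve `rtt_s` for round-trip
    latency, and fill the rest with payload at the measured bandwidth."""
    payload_time = max(target_time_s - rtt_s, 0.1)   # never go below 100 ms
    size = int(bandwidth_bps / 8 * payload_time)     # bits/s -> bytes
    return max(min_size, min(size, max_size))        # clamp to bounds

# On a 100 Mbit/s link with 50 ms RTT this yields fragments of roughly 12 MB.
size = optimal_fragment_size(100_000_000, 0.05)
```

The transmission batch parameter could then be derived similarly, e.g. total payload divided by the chosen fragment size.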
Preferably, in the step S106, node resource monitoring data is obtained, and the occupancy rates of the node processor and the memory are counted by using a resource load calculator to obtain a resource load value;
Then, the data processor calculates the processing rate and the transmission rate of the node according to the resource load value, and obtains a transmission sample of a corresponding tool chain through a data sampler;
generating a test data set by using the transmission sample, and recording the backlog quantity of the data to obtain backlog state data;
The dependency verifier checks the dependency relationship of the tool nodes according to the backlog state data, and calculates a priority value to obtain a tool execution sequence;
and acquiring a scheduling sequence table from the tool execution sequence, dividing the computing resource quota according to the scheduling sequence table by the resource allocator, and dynamically adjusting the concurrency quantity of the tools by the resource monitor.
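The step S106 logic above (resource load value, priority value, execution order, concurrency quota) can be sketched as follows. The scoring weights, node fields, and worker budget are illustrative assumptions; the disclosure names the components but not their formulas.

```python
def execution_plan(nodes, max_workers=8):
    """Order tool nodes by a priority score (heavy backlog on lightly
    loaded nodes first) and split a worker budget proportionally.
    The 0.5/0.5 load weighting is an assumption for illustration."""
    def priority(n):
        load = 0.5 * n["cpu"] + 0.5 * n["mem"]   # resource load value
        return n["backlog"] * (1.0 - load)       # prefer low load, big backlog
    ordered = sorted(nodes, key=priority, reverse=True)
    total = sum(priority(n) for n in ordered) or 1.0
    for n in ordered:
        # Proportional concurrency quota, at least one worker per node.
        n["workers"] = max(1, round(max_workers * priority(n) / total))
    return ordered

# Hypothetical nodes: an idle ETL tool with a large backlog outranks a
# heavily loaded reporting tool.
nodes = [{"name": "etl", "cpu": 0.2, "mem": 0.2, "backlog": 100},
         {"name": "report", "cpu": 0.8, "mem": 0.8, "backlog": 50}]
plan = execution_plan(nodes)
```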
The cloud computing-based enterprise digital tool chain dynamic collaboration and deployment method has the following advantages. For the interface protocol changes caused by upgrading a core tool in the monitored tool chain, the method judges the pre- and post-upgrade interface differences through semantic analysis and automatically adjusts the data mapping relationships. Using rule-based reasoning, it generates data mapping scripts between different tool versions according to preset adaptation rules, realizing dynamic data transmission and conversion. The method also continuously optimizes the data field matching model, analyzes the relevance between newly added and existing fields, builds complete data mapping relationships, and, after deleting deprecated interfaces, re-plans the data transmission paths between tools with a graph algorithm.
The invention also builds a tool chain performance model, analyzes processing delay and throughput before and after an interface change, optimizes data flow efficiency by adjusting the data fragment size and transmission batch parameters, and dynamically adjusts the execution order and concurrency of the tools, thereby realizing intelligent adaptation and performance optimization of the tool chain.
Drawings
FIG. 1 is a flowchart of a method for dynamic collaboration and deployment of an enterprise digital toolchain based on cloud computing according to the present application;
FIG. 2 is a second flowchart of a method for dynamic collaboration and deployment of an enterprise digital toolchain based on cloud computing according to the present application.
Detailed Description
As shown in FIGS. 1-2, the cloud computing-based method for the dynamic collaboration and deployment of an enterprise digital tool chain comprises the following steps:
as shown in FIGS. 1-2, in step S101, the version changes of the core tools in the tool chain are monitored, the newly added data fields and deprecated interface information after the upgrade are obtained, the differences between the pre- and post-upgrade interface protocols are judged by semantic analysis, the data mapping relationships are adjusted according to those differences to obtain the adjustment content, and a data mapping relationship change list is generated;
Acquiring a field change set from a version management warehouse according to the serial number of the patch package, and establishing a field change index table through the field change set;
identifying the name and the type of the newly added field in the field change index table by adopting regular matching, and acquiring the semantic feature value of the newly added field through a feature vector calculator;
selecting a field pair with similarity larger than a similarity threshold and closest to the semantic feature value, and judging the type compatibility of the field pair through a data type verifier;
Generating a data migration rule by adopting a data conversion compensator according to the field pair type compatibility, and acquiring a data consistency verification result of the data migration rule by a rule verifier;
marking the reliable grade of the mapping rule according to the verification result, and generating a marking mapping relation list with the reliable grade;
for mapping rules in the reliability-marked mapping relationship list whose reliability level is below the threshold, the mapping relationship is rechecked, the reliability level of the rule is updated according to the recheck result, and a data mapping change list is generated.
Specifically, in step S101, an upgrade version number, an upgrade date, and an upgrade patch serial number are acquired by using a data acquisition trigger for a monitoring tool chain, a field change set in a patch is acquired from a version management warehouse, and a field change index table is established according to the patch serial number;
Extracting interface change records before and after upgrading from a version management warehouse, identifying the name and the type of a newly added field by adopting regular matching, extracting field semantic feature values by a feature vector calculator, and calculating a similarity matrix of the new field and the old field according to the semantic feature values;
Setting a mapping matching threshold value for a field similarity matrix, selecting a field pair with similarity larger than the threshold value and closest to the threshold value by adopting a maximum similarity matching algorithm, verifying field type compatibility by a data type verifier, and generating an initial mapping rule table according to a matching result;
Acquiring a field mapping rule set from an initial mapping rule table, generating a data migration rule by adopting a data conversion compensator aiming at the abandoned field, checking the data consistency by a rule verifier, and generating the data migration rule set according to the verification result;
Acquiring a complete mapping scheme from a field mapping rule set and a data migration rule set, verifying mapping correctness by adopting a data sample verifier aiming at each mapping rule, marking the reliable grade of the mapping rule according to a verification result, and generating a mapping relation list with a reliable grade mark;
For mapping rules in the reliability-marked mapping relationship list whose reliability level is below the threshold, the mapping relationship is checked through a manual confirmation interface, and the reliability level of the rule is updated according to the check result to generate a data mapping change list;
In the tool chain monitoring and data acquisition process, when the data acquisition trigger obtains upgrade patch package information, an index relationship is established for the patch package serial number. The serial number is formed by combining a timestamp (year, month, day, hour, minute, second) with a random number; for example, in serial number 20240102143022899 the timestamp denotes 2024-01-02 14:30:22 and 899 is the random number. Version change records can be rapidly located through the patch package serial number index;
In a version management warehouse, an interface change record comprises field names, field types and default value attribute information, an original interface is defined as a username for a character type user name field, the original interface is changed into a user_name after upgrading, when a field semantic characteristic value is extracted through a characteristic vector calculator, the field is split into independent morphemes by adopting a word segmentation technology, similarity among morphemes is calculated to obtain a field similarity matrix, a similarity value is between 0 and 1, and the larger the value is, the more similar the field semantic is;
In the setting of the mapping matching threshold, selecting a proper threshold based on a service scene, wherein the proper threshold is set to be 0.9 for the complete matching threshold of the fields, the partial matching threshold is set to be 0.7, when the field similarity is larger than the threshold, a mapping relation is established, and a field mapping rule is established for the field similarity of user_name and username of 0.95;
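The feature vector calculator is not specified in the disclosure; as one minimal stand-in, character-bigram cosine similarity already reproduces the behavior described above, scoring user_name against username near 1.0 and unrelated names well below the 0.7 partial-match threshold. The `bigrams`/`cosine` helpers are assumptions for illustration only.

```python
from collections import Counter
from math import sqrt

def bigrams(name):
    """Character-bigram counts of a field name, ignoring case and '_'."""
    s = name.lower().replace("_", "")
    return Counter(s[i:i + 2] for i in range(len(s) - 1))

def cosine(a, b):
    """Cosine similarity between two field names' bigram vectors."""
    va, vb = bigrams(a), bigrams(b)
    dot = sum(va[k] * vb[k] for k in va)
    na = sqrt(sum(v * v for v in va.values()))
    nb = sqrt(sum(v * v for v in vb.values()))
    return dot / (na * nb) if na and nb else 0.0

# Thresholds from the example above: 0.9 full match, 0.7 partial match.
FULL_MATCH, PARTIAL_MATCH = 0.9, 0.7

sim = cosine("user_name", "username")   # well above FULL_MATCH
```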
the data type checker verifies whether the new field type and the old field type are compatible, if the original field type is varchar (50), the new field type is varchar (100), and the type compatibility is judged to be capable of establishing mapping;
In the generation of data migration rules, for the deprecated field phone_no, the fields mobile_phone and telephone are newly added. The data conversion compensator splits the original phone number data and migrates it to the corresponding new field according to mobile/landline rules: an 11-digit number beginning with 1 is stored in the mobile_phone field, and a landline number is stored in the telephone field;
In the mapping rule verification stage, a data sample verifier extracts different data distribution sample verification mapping rules, the verification data accounts for not less than 30%, the mapping rules with the verification passing rate higher than 95% are marked as high-reliability grades, the passing rate between 80% and 95% are marked as medium-reliability grades, and the passing rate lower than 80% is marked as low-reliability grades;
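The reliability tiers just described map directly to a small classifier; the function name is a hypothetical stand-in, but the thresholds (above 95% high, 80–95% medium, below 80% low) come from the text.

```python
def reliability_grade(pass_rate):
    """Map a verification pass rate (0-1) to the tiers in the text:
    > 0.95 high, 0.80-0.95 medium, < 0.80 low (flagged for manual review)."""
    if pass_rate > 0.95:
        return "high"
    if pass_rate >= 0.80:
        return "medium"
    return "low"
```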
When the low-reliability level mapping rule comprises two candidate fields, namely a user_addr mapping to a user_address and a user_location, the mapping relation needs to be checked manually, and a data mapping change list is generated, wherein the data mapping change list comprises a field mapping rule identifier, a source field name, a target field name, a field type, a default value, whether discarding is performed, a data conversion rule and a mapping reliability level attribute, and the change list is used for guiding subsequent data migration work to ensure field mapping accuracy and data integrity in the data migration process.
As shown in fig. 1-2, in step S102, according to the data mapping relation change list, a rule-based reasoning method is adopted to analyze the change content of the interface protocol and the data field, and according to the pre-configured interface protocol adaptation rule, a data mapping script between tools of different versions is generated;
reading an interface change rule set from the data mapping relation change list by using a rule analyzer, and analyzing a field mapping relation by the rule analyzer to obtain a rule grammar tree;
Generating an initial rule relation graph according to the rule grammar tree, and identifying the relation among the rule nodes by the initial rule relation graph through an integrity checker to obtain a node connection integrity verification result;
Selecting a mapping relation between the nodes of the complement rule of the matching rule from a rule template library aiming at the node connection integrity verification result to obtain a rule relation diagram after complement;
And converting the rule relation diagram after completion into a data processing instruction sequence through a mapping template generator, and verifying the data conversion correctness of the data processing instruction sequence through a data sample tester to obtain a data mapping script.
Specifically, in step S102, a mapping rule parser is used to read an interface change rule set from a data mapping relationship change list, a field mapping relationship and a data conversion rule are parsed by the rule parser, a rule syntax tree is generated for the change rule set, and an initial rule relation graph is generated according to the syntax tree node relation;
identifying the relation among the rule nodes by adopting an integrity checker aiming at the initial rule relation graph, judging the connection integrity of the rule nodes by adopting a node association verifier, selecting a matching rule from a rule template library by adopting a rule complement device, and complementing the mapping relation among the rule nodes according to the selected rule template;
Acquiring a complete mapping rule set from the completed rule relation diagram, converting the rule into a data processing instruction sequence through a mapping template generator, generating a data mapping code segment aiming at the instruction sequence, and generating a complete data mapping script according to a code assembler;
carrying out tool version compatibility verification on the data mapping script by adopting a version compatibility checker, verifying the correctness of data conversion by a data sample tester, generating a deployment configuration file by the script aiming at verification, and loading the data mapping script according to the configuration file;
Acquiring deployed mapping scripts from a data mapping script loader, establishing an inter-tool communication pipeline through a data receiving adapter, adjusting data processing speed by adopting a current limiting controller aiming at a data transmission process, and storing a data conversion process according to a data processing log recorder;
The data processing log is monitored in real time by adopting an anomaly monitor, the integrity of the converted data is verified by a data quality checker, the consistency of the source data and the target data is checked by adopting a data account checking device, and the correctness of the data conversion is judged according to the checking result;
When the mapping rule analyzer processes the data mapping relation change list, analyzing field mapping relation and data conversion rule, constructing rule grammar tree, wherein each node contains conversion rule and data field corresponding relation, in the customer information change scene, the original interface field customer_info contains three subfields of name, age and address, after upgrading, the original interface field customer_info is split into two field groups of basic_info and detail_info, and the rule grammar tree records field splitting relation;
In the process of checking the rule integrity, a node association verifier identifies the integrity of the relationship among rule nodes, takes order processing as an example, an order creation node associates an order payment node, a payment node associates an order delivery node to form a complete service link, and when the lack of a payment result processing rule between the order payment node and the order delivery node is found, a rule complement device selects a payment result processing template from a rule template library to supplement the node relationship;
In the stage of generating a data mapping script, a mapping template generator converts a complete rule into a data processing instruction, and aiming at a commodity information synchronization scene, an original interface commodity description field product_desc limit length 500 characters and an updated field description limit length 1000 characters, wherein an instruction sequence comprises field length expansion and data truncation processing operation, and a code assembler assembles a plurality of processing instructions into a complete mapping script;
The version compatibility checking process uses a data sample tester to verify the accuracy of data conversion. In a sales data processing scenario, the original interface amount field amount has a precision of 2 decimal places while the updated field sale_amount has 4 decimal places; the precision-raising conversion of the amount data is verified with test data, and the script deployment configuration is generated after verification passes;
In the data transmission process, the current limiting controller adjusts the data processing speed according to the processing capacity of the tool, wherein the processing speed of the log acquisition tool is limited to 1000 pieces per second, and when the transmission speed of an upstream data source reaches 2000 pieces per second, the current limiting controller starts a queue caching mechanism to temporarily store data exceeding the processing capacity into a queue so as to avoid data loss;
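The queue-buffering behavior of the current-limiting controller can be sketched in a few lines. The class below is an illustrative assumption, not the disclosed implementation; the capacity figures (1000 items/s downstream against 2000 items/s upstream) are the example values from the text.

```python
from collections import deque

class RateLimiter:
    """Per-tick limiter with an overflow queue, sketching the
    current-limiting controller: arrivals beyond capacity are
    buffered rather than dropped."""
    def __init__(self, capacity_per_tick):
        self.capacity = capacity_per_tick
        self.queue = deque()

    def tick(self, arriving):
        """Process one time slice: enqueue arrivals, forward up to capacity."""
        self.queue.extend(arriving)
        n = min(self.capacity, len(self.queue))
        return [self.queue.popleft() for _ in range(n)]

limiter = RateLimiter(capacity_per_tick=1000)
sent = limiter.tick(range(2000))   # 2000 arrive in one slice
backlog = len(limiter.queue)       # 1000 buffered, none lost
```

On the next slice the buffered 1000 items drain before any new arrivals, matching the no-data-loss behavior described above.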
In the monitoring of the data processing process, an anomaly monitor detects data processing anomalies in real time. In an inventory synchronization scenario, a data quality checker verifies the correctness of the stock quantity: the original interface stock field stock_qty is an integer, the updated field inventory supports decimals, and a data reconciliation device checks the consistency of stock quantities before and after conversion; when a discrepancy is found, an anomaly log is recorded and a data correction mechanism is triggered.
As shown in FIGS. 1-2, in step S103, when a tool version upgrade causes the interface protocol to change, the data mapping relationships are adjusted: during adjustment, similarity analysis is performed between the newly added fields and the existing fields, their relevance is deduced, the new fields are incorporated into the data field matcher, a complete data mapping relationship is constructed, and the deprecated interfaces are deleted;
Reading the new and old version interface definition files through a protocol change analyzer, extracting field names and types by adopting a field attribute analyzer, and generating field feature vectors according to text vector tools;
For the field feature vector, a vector similarity calculator is adopted to calculate cosine similarity between the newly added field and the existing field, and if the similarity is higher than a similarity threshold, a field mapping rule set is generated;
Acquiring a newly added field data sample through a data sampler according to the field mapping rule set, calculating a data distribution feature vector by adopting a feature extractor, and judging the accuracy of the field mapping rule;
for the field mapping rules, a call-link analyzer identifies the dependency relationships of the deprecated interfaces, a topological sorting algorithm generates a cleanup sequence table, and a call verifier confirms the disabled state of the cleaned interfaces.
Specifically, in step S103, interface protocol change notification information is obtained from the upgrade tool, a new and old version interface definition file is read by using a protocol change analyzer, a field name, a type, a length and a default value are extracted by a field attribute analyzer, a text vectorization tool is used for calculating semantic features of the field name, and a field feature vector is generated according to the field attribute features and the semantic features;
Acquiring all newly added field data from the field feature vectors, calculating cosine similarity between the newly added field and the existing field feature vectors through a vector similarity calculator, generating a mapping association relation aiming at field pairs with similarity higher than a similarity threshold value, and judging field type compatibility according to a field mapping rule verifier to obtain an initial mapping scheme;
Acquiring a mapping rule set from an initial mapping scheme, acquiring a newly added field data sample set through a data sampler, calculating a data distribution feature vector by adopting a feature extractor, calculating data feature similarity according to a deep neural network, and judging the accuracy of the field mapping rule;
Aiming at the field pairs with the accuracy lower than the threshold value of the mapping rule, updating a field matcher based on the newly added data samples by adopting an increment learning algorithm, evaluating the matching accuracy through a cross verifier, judging whether to continue optimizing according to the accuracy threshold value, and obtaining an optimized mapping scheme;
Acquiring the deprecated interface identifiers from the interface call records, identifying the dependency relationships of the deprecated interfaces through the call-link analyzer, generating a cleanup order table for the dependency links with a topological sorting algorithm, and generating interface cleanup instructions according to the order table;
Aiming at the interface cleaning instruction, adopting a configuration updater to delete the configuration information of the abandoned interface, confirming that the cleaning interface is completely deactivated by calling a verifier, judging the effectiveness of the cleaning result according to the verification result, and generating an interface cleaning report;
When the protocol change analyzer processes the old and new version interface definition files, the field attribute analyzer extracts the characteristic information of each field. For example, for the commodity name field in the commodity information synchronization interface, the original field name is product_name, of type varchar, length 50 characters, with an empty default value; after the upgrade the field name becomes item_name, of type varchar, length 100 characters, with an empty default value. The text vectorization tool analyzes the field-name semantics and calculates the feature vector values;
In the vector similarity calculation process, cosine similarity is used to measure the similarity between field feature vectors. For the order processing interface, the order status field order_status of the original interface takes the values created, paid and shipped; after the upgrade the status field is split into a payment status payment_status and a logistics status shipping_status. The mapping relationships between fields are judged through feature vector calculation, where a similarity above 0.8 is considered related;
In the data feature extraction step, the data sampler acquires sample data of the newly added fields. Taking a user information synchronization scenario as an example, the original interface has a user address field user_address, which is split after the upgrade into a province field province, a city field city and a detail address field details_address. The feature extractor analyzes the distribution patterns of the original address and the split address data, and the deep neural network judges the field mapping accuracy based on the data feature similarity;
In the incremental learning process, aiming at a mobile phone number field, an original interface field is mobile_phone, the mobile phone number field is changed into phone_number after upgrading, the initial mapping accuracy is 0.75, an incremental learning algorithm is trained based on 1000 newly added data samples, the cross verification display accuracy is improved to 0.92 and exceeds a 0.9 threshold, and optimization is completed;
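The accept-or-continue decision in the incremental learning loop above can be reduced to a simple threshold check (a minimal sketch; the function name and the sample counts are illustrative, only the 0.9 threshold and the 0.75/0.92 accuracies come from the example):

```python
def needs_more_optimization(correct, total, threshold=0.9):
    # After each incremental training round, cross-validated mapping
    # accuracy is compared against the acceptance threshold; the rule
    # is accepted once accuracy clears the threshold.
    accuracy = correct / total if total else 0.0
    return accuracy < threshold

# Initial round: 750 of 1000 samples mapped correctly (0.75) -> keep optimizing.
# After retraining on the new samples: 920 of 1000 (0.92) -> accepted.
```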
In the deprecated interface processing stage, the call-link analyzer identifies the interface dependency relationships. Taking a payment scenario as an example, the payment result query interface depends on the payment status query interface, and the status query interface depends on the payment creation interface; topological sorting generates the cleanup order: the result query interface is cleaned first, the status query interface second, and the creation interface last;
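The topological ordering of the payment example can be sketched with Kahn's algorithm (a minimal sketch; the interface names are shorthand for the interfaces in the example):

```python
from collections import deque

def cleanup_order(deps):
    # deps maps each deprecated interface to the interfaces it depends on.
    # Kahn's algorithm: an interface is cleaned only after everything that
    # depends on it has been cleaned, so dependents come first in the order.
    indegree = {node: 0 for node in deps}
    for node, targets in deps.items():
        for t in targets:
            indegree[t] += 1  # t is depended upon by node
    queue = deque(n for n, d in indegree.items() if d == 0)
    order = []
    while queue:
        node = queue.popleft()
        order.append(node)
        for t in deps[node]:
            indegree[t] -= 1
            if indegree[t] == 0:
                queue.append(t)
    if len(order) != len(deps):
        raise ValueError("cycle in interface dependencies")
    return order
```

For the payment scenario this yields result query, then status query, then creation, matching the cleanup sequence described.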
In the interface cleaning verification, after the configuration updater deletes the interface configuration, the verifier is called to monitor for 30 minutes, the calling amount of the abandoned interface is confirmed to be 0, no abnormal error is reported, the interface cleaning is successful, and a cleaning report record processing process is generated;
after the mapping scheme is implemented, a mapping relation is established for 95% of fields in the original interface, 5% of fields cannot be mapped due to service change, and the fields are deleted from the list after manual confirmation, so that interface protocol upgrading is completed.
As shown in fig. 1-2, in step S104, after deleting the obsolete interface, a shortest path search algorithm based on a graph is adopted to bypass the deleted obsolete interface, re-plan the data transmission path between tools, and re-form a complete tool chain data transmission path;
The method comprises the steps of sending a detection request to a node in a tool link through a node detector, and acquiring a tool link topological structure according to a comparison result of response time and a response time threshold value of the node;
According to the tool link topological structure, calculating a data flow path among nodes by adopting a connectivity detector, and obtaining a weighted shortest path aiming at the transmission rate and the processing rate of the data flow path;
For the weighted shortest path, calculating a transmission delay score, a packet loss rate score and a load balancing degree score by a path scoring device, and judging an optimal transmission path according to the weight of each dimension score;
and updating a routing configuration table of the node according to the optimal transmission path by adopting a configuration issuing device, acquiring transmission performance indexes through a performance collector, and switching to an alternative path if the presence of abnormal performance indexes is detected.
Specifically, in step S104, a tool link scanner is used to obtain a tool link topology structure, a probe request is sent to each node through a node probe, the node with response time exceeding a preset threshold is marked as an abnormal state, and a tool link topology graph is generated according to a calling relationship between the nodes;
Acquiring the node connection relationships from the tool link topology graph, calculating the data flow paths between nodes through the connectivity detector, counting the packet transmission rate and node processing rate for each data transmission path with a load calculator, and calculating the weighted shortest path according to Dijkstra's algorithm;
Obtaining candidate paths from the weighted shortest path set, carrying out multidimensional scoring on the candidate paths through a path scoring device, respectively calculating scores aiming at three dimensions of transmission delay, packet loss rate and load balancing degree, and calculating a path comprehensive score according to the weight of each dimension;
selecting an optimal transmission path from the path comprehensive score list, transmitting detection data packets through a data packet detector to verify the connectivity of the path, acquiring transmission quality indexes by adopting a data sampler aiming at the communication path, and generating a path availability report according to the transmission quality verification result;
Generating route configuration information by adopting a path configurator aiming at verification passing transmission paths, updating a node route configuration table by adopting a configuration issuing device, checking configuration consistency by adopting a configuration verifier aiming at configuration issuing results, and updating path states according to verification results;
Acquiring transmission path information from the updated route configuration, monitoring data transmission performance indexes through a performance collector, switching to an alternative path by adopting a path switcher aiming at abnormal performance indexes, and updating a route configuration table according to a switching result;
In tool link topology scanning, the node prober detects node states by sending heartbeat packets; the response time of a normal node stays within 200 milliseconds, and a node is marked abnormal if 3 consecutive probe responses exceed 500 milliseconds. In the order processing tool chain, data is transmitted between the order creation node, the payment node and the inventory node through a message queue;
When the data flow paths are calculated, the connectivity detector identifies the data transmission channels between nodes, and the load calculator collects the performance indicators of each node: the processing rate of the order creation node is 1000 orders per second, the transmission rate of the message queue is 2000 messages per second, and the processing rate of the payment node is 800 transactions per second. Dijkstra's algorithm takes node processing delay and transmission delay as path weights to calculate the shortest path;
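A weighted shortest-path search of the kind described can be sketched with a standard heap-based Dijkstra, where each edge carries a transmission delay and each node adds its processing delay (a minimal sketch; node names and delay values are assumed for illustration):

```python
import heapq

def dijkstra(graph, node_delay, src, dst):
    # graph[u] -> list of (v, transmission_delay_ms);
    # node_delay[v] is the processing delay added on entering node v.
    dist = {src: node_delay.get(src, 0)}
    heap = [(dist[src], src)]
    prev = {}
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v, w in graph.get(u, []):
            nd = d + w + node_delay.get(v, 0)
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(heap, (nd, v))
    path = [dst]
    while path[-1] != src:
        path.append(prev[path[-1]])
    return list(reversed(path)), dist[dst]
```

A slow intermediate node (e.g. a heavily loaded payment node) raises the cost of any route through it, so the search naturally prefers routes around the bottleneck.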
In the path scoring process, candidate paths are scored across multiple dimensions. In the transmission-delay dimension, end-to-end delay below 500 milliseconds scores 90 points, 500 to 800 milliseconds scores 70 points, and above 800 milliseconds scores 50 points;
In the packet-loss dimension, a loss rate below 0.1% scores 90 points, 0.1% to 0.5% scores 70 points, and above 0.5% scores 50 points;
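The scoring bands above combine into a composite path score by a weighted sum; the dimension weights below are assumptions (the specification says the dimensions are weighted but does not state the values):

```python
def delay_score(delay_ms):
    # Transmission-delay band scores from the text.
    if delay_ms < 500:
        return 90
    if delay_ms <= 800:
        return 70
    return 50

def loss_score(loss_pct):
    # Packet-loss band scores from the text (loss_pct is a percentage).
    if loss_pct < 0.1:
        return 90
    if loss_pct <= 0.5:
        return 70
    return 50

def composite_score(delay_ms, loss_pct, balance_score, weights=(0.4, 0.3, 0.3)):
    # Weighted sum over delay, loss and load-balance dimensions;
    # the (0.4, 0.3, 0.3) weights are illustrative assumptions.
    wd, wl, wb = weights
    return wd * delay_score(delay_ms) + wl * loss_score(loss_pct) + wb * balance_score
```

The candidate path with the highest composite score is then selected as the optimal transmission path.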
In the path verification stage, the packet prober sends probe packets every 60 seconds to check path connectivity, and the data sampler performs sampling analysis on the transmitted data at a ratio of one in a thousand, recording data transmission quality indicators including average transmission delay, packet loss rate and data error rate; a path is judged available when transmission delay is below 800 milliseconds and packet loss rate is below 0.5%. In routing configuration updates, the configuration dispatcher issues the routing table to the nodes, including the target node identifier, the next-hop node and the transmission channel information; the configuration verifier validates the issued configuration by comparing each node's local routing table with the issued configuration, and the routing update completes when configuration consistency reaches 100%;
In the transmission performance monitoring stage, the performance collector collects performance indicators every 30 seconds; when node processing delay exceeds the preset threshold or the number of backlogged messages in a queue exceeds 80% of queue capacity, the path switcher activates an alternative path and switches data traffic to the alternative channel. Message ordering is preserved during the switch to avoid message duplication or loss; after the switch completes, the routing configuration is updated and the switching event is recorded;
under the commodity inventory synchronization scene, a tool chain is formed by the commodity service node, the inventory service node and the order service node, inventory change information is transmitted through the information queue, and when the inventory service node has performance bottleneck, and the information processing delay exceeds 1 second, the path switcher switches the information flow to the standby inventory node, so that real-time synchronization of inventory data is ensured.
As shown in fig. 1-2, in step S105, collecting the data processing delay and resource consumption of the tool nodes on the re-formed tool chain data transmission path, analyzing the average processing delay and throughput before and after the interface protocol change, and judging whether network latency and bandwidth limits exist; if they exist, acquiring the data shard size and transmission batch parameters of the data processing process;
Acquiring processing time delay data and resource consumption data from a tool chain link point by adopting a performance collector, wherein the processing time delay data is generated in a tool chain node processing process;
obtaining performance index data with uniform scale through a data normalization processor according to the processing time delay data and the resource consumption data, wherein the performance index data comprises a processing time delay index and a resource consumption index;
transmitting a detection packet through a network detector to obtain network transmission delay data, wherein the network transmission delay data is obtained by calculation according to the performance index data;
and generating an optimal fragment scale parameter by adopting a data fragment calculator aiming at the network transmission delay data, wherein the optimal fragment scale parameter is calculated according to network bandwidth and node processing capacity.
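The specification states that the optimal shard size derives from network bandwidth and node processing capacity but does not give the formula; one plausible heuristic, with assumed sizing rule and assumed batch rule, might look like this:

```python
def shard_parameters(bandwidth_bps, msgs_per_sec, min_kb=64, max_kb=1024):
    # Heuristic (an assumption, not the patented formula): size a shard so
    # the per-message share of link capacity is used, then clamp into the
    # 64 KB - 1024 KB range mentioned later in the text.
    raw_kb = bandwidth_bps / 8 / msgs_per_sec / 1024
    shard_kb = min(max(raw_kb, min_kb), max_kb)
    # Assumed batch rule: roughly a tenth of a node-second of messages,
    # capped at 100 messages per batch as in the example.
    batch = min(100, max(1, int(msgs_per_sec // 10)))
    return shard_kb, batch
```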
Specifically, in step S105, a performance collector is adopted to obtain processing delay data and resource consumption data from the tool link points, a data statistics device is used to calculate the processing delay mean value and standard deviation of each node, periodic sampling is performed for the occupation rate of the central processing unit, the memory occupation rate and the disk read-write rate, and resource consumption trend data is generated according to the time sequence processor;
Acquiring performance indexes before and after the interface protocol is changed from the resource consumption trend data, converting different dimension indexes into uniform dimensions through a data normalization processor, identifying performance outliers by adopting a Bayesian inference algorithm aiming at processing time delay and throughput, and constructing a tool chain performance prediction function according to a performance predictor;
acquiring a performance prediction result from a tool chain performance prediction function, sending a detection packet through a network detector to calculate network transmission delay, recording real-time flow data by adopting a flow monitor according to network bandwidth use conditions, and generating a network transmission quality score according to a network quality score device;
Acquiring a network state evaluation result from the network transmission quality score, identifying a transmission bottleneck node through a network bottleneck locator, adopting a resource load calculator to count a processing capacity threshold aiming at the bottleneck node, and generating a network resource allocation scheme according to a load balancer;
Aiming at a network resource allocation scheme, a data slicing calculator is adopted to calculate the optimal slicing scale according to the network bandwidth and the node processing capacity, data batch transmission parameters are set through a transmission batch generator, and the slicing transmission effect is verified according to a data transmission simulator;
Obtaining optimized parameters from the fragment transmission verification result, verifying the rationality of the parameter value range through a parameter constraint checker, updating data transmission configuration by adopting a parameter applicator aiming at verification passing parameters, and recording parameter adjustment effects according to a transmission performance monitor;
Taking a data synchronization scene as an example when the performance collector collects operation data at a tool chain node, sampling node processing time delay 100 times per minute, wherein the average value of the processing time delay of the sampled data is 200 milliseconds, the standard deviation is 50 milliseconds, meanwhile, the node resource consumption data is recorded, the average occupancy rate of a central processing unit is 75%, the memory occupancy rate is 65%, and the disk read-write speed is 120 MB/second;
in the data normalization process, the performance indicators are uniformly converted into the 0-1 interval. The performance comparison before and after the interface protocol change shows that processing delay increased from 200 milliseconds to 350 milliseconds and data throughput dropped from 1000 to 600 records per second; the Bayesian inference algorithm flags processing delays above 300 milliseconds as outliers, and the constructed performance prediction function shows the processing capacity in a decreasing trend;
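The conversion of heterogeneous indicators into a unified 0-1 scale is typically done with min-max scaling; a minimal sketch (the scaling choice is an assumption, as the text only says indicators are mapped to the 0-1 interval):

```python
def normalize(values):
    # Min-max scaling of one performance metric into the 0-1 range.
    lo, hi = min(values), max(values)
    if hi == lo:
        return [0.0 for _ in values]  # constant series carries no signal
    return [(v - lo) / (hi - lo) for v in values]
```

Applied per metric (delay in milliseconds, throughput in records per second, CPU occupancy in percent), this makes differently scaled indicators directly comparable.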
In network state monitoring, a network detector transmits detection packets every 30 seconds to calculate network delay, the detection result shows that the average transmission delay is 150 milliseconds, the network jitter is within 50 milliseconds, the bandwidth utilization rate recorded by a flow monitor reaches 85%, the network quality score adopts a percentile system, and the current network transmission quality score is 75 minutes;
The network bottleneck recognition process shows that the insufficient processing capacity of the data receiving node leads to data backlog, the threshold value of the processing capacity of the node is 800 messages per second, the generation rate of an upstream data source reaches 1200 messages per second, and the load balancer distributes data flow to the main node and the standby node according to the processing capacity difference in a ratio of 3:7;
The data slicing calculation result shows that according to the bandwidth of 100MB and the capability of processing 800 messages per second of a node, the optimal slicing size is set to 512KB, the number of single-batch transmission messages is controlled within 100, the data transmission simulation test shows that after slicing transmission is adopted, the node processing time delay is reduced to 180 milliseconds, and the data backlog phenomenon is relieved;
In the parameter optimization stage, the parameter constraint checker verifies that the fragment size value range is between 64KB and 1024KB, the batch size value range is between 50 and 200, the transmission performance monitoring after the parameter application shows that the node processing time delay is stabilized within 200 milliseconds, the CPU occupancy rate is reduced to 65%, the memory occupancy rate is reduced to 55%, and the network bandwidth utilization rate is reduced to 70%;
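The parameter constraint check described above reduces to two range tests using the stated bounds (64-1024 KB shard size, 50-200 batch size); a minimal sketch:

```python
def check_parameters(shard_kb, batch_size):
    # Validity check mirroring the ranges in the text:
    # shard size must be 64-1024 KB, batch size 50-200 messages.
    return 64 <= shard_kb <= 1024 and 50 <= batch_size <= 200
```

Only parameter pairs passing this check are handed to the parameter applicator for configuration update.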
In the order processing tool chain, data are transmitted among the order inquiry service, the order statistics service and the order export service through a message queue, the processing capacity of the order export service in the peak period is insufficient, the message queue is backlogged, after the optimization is carried out, the order processing link is restored to normal operation, and the order export delay is shortened to be within 3 minutes from 10 minutes.
As shown in fig. 1-2, in step S106, the size of the data fragment and the transmission batch parameter are adjusted, the circulation efficiency of the data in the tool chain is optimized, and the execution sequence and concurrency of each tool in the tool chain are dynamically adjusted according to the optimized data circulation scheme;
acquiring node resource monitoring data, and counting the occupancy rate and the memory occupancy rate of a node processor by adopting a resource load calculator according to the monitoring data to obtain a resource load value;
calculating the node processing rate and the transmission rate according to the resource load value by adopting a data processor, and acquiring a tool chain transmission sample corresponding to the transmission rate through a data sampler;
Generating a test data set according to the transmission sample, and recording the data backlog quantity according to the test data set to obtain backlog state data;
checking tool node dependency relations according to the backlog state data through a dependency verifier, calculating priority values according to the dependency relations, and generating a tool execution sequence according to the priority values;
And acquiring a scheduling sequence table from the tool execution sequence, dividing the computing resource quota by adopting a resource allocator, and dynamically adjusting the tool concurrency quantity according to the resource monitor.
Specifically, in step S106, resource monitoring data is obtained from the tool nodes, the occupancy rate and the memory occupancy rate of the central processor of each node are counted by using a resource load calculator, the data processing rate and the transmission rate of the nodes are calculated by using a data processor, the initial fragmentation parameters and the batch parameters are calculated by using a genetic algorithm aiming at the rate data, and a parameter value interval is generated according to the resource capacity threshold;
acquiring the size of the data fragments and the transmission batch value from the parameter value interval, acquiring a tool chain transmission sample through a data sampler, counting the transmission delay and the throughput of the transmission sample by adopting a performance calculator, and adjusting the size of the data fragments and the transmission batch value according to a preset performance index;
carrying out parameter verification on the adjusted parameters by adopting a transmission verifier, constructing a test data set by a data generator, recording the data backlog quantity by adopting a queue monitor for test data transmission, and judging the rationality of the parameters according to backlog conditions;
Acquiring node dependency data from the tool dependency graph, checking the correctness of the dependency through a dependency verifier, calculating a tool priority value by adopting a task priority generator aiming at verification of the passing dependency, and generating a tool execution sequence according to a topology sequencer;
Acquiring a scheduling sequence table from a tool execution sequence, setting a tool concurrency threshold through a concurrency controller, dividing and calculating resource quota by adopting a resource allocator for tool operation, and dynamically adjusting the tool concurrency quantity according to a resource monitor;
Aiming at the running state of the tool, an execution monitor is adopted to track the execution condition of the task, a performance collector is used for recording the processing capacity index of the tool, a load predictor is adopted to calculate the use trend of resources, and the resource configuration of the tool is updated according to the prediction result;
In the process of monitoring tool node resources, taking an order processing tool chain as an example, the occupancy rate of an order creation node central processing unit is 75%, the occupancy rate of a memory is 65%, the data processing rate is 1000 orders per second, the transmission rate is 1500 messages per second, the initial data fragment size is 256KB through iterative computation by a genetic algorithm, the number of single-batch transmission messages is 50, and the upper limit of the resource capacity threshold limiting fragment size is 1024KB;
in the performance calculation stage, a data sampler collects transmission samples every 10 seconds, the sampling result shows that the average transmission time delay is 180 milliseconds, the throughput is 800 orders per second, the transmission time delay is lower than 150 milliseconds compared with the preset performance index, the throughput is not lower than 1000 orders per second, the size of the fragments is adjusted to 128KB according to the difference, and the batch size is increased to 80 orders;
The parameter verification link, the transmission verifier generates a test data set containing 10000 orders of data, the transmission test is carried out under a high concurrence scene, the maximum backlog number of the queue monitoring display message queue is 500 and is lower than 1000 backlog threshold values, and the verification parameter setting is reasonable;
In the tool dependency analysis, an order processing tool chain comprises four nodes, namely order creation, inventory checking, payment processing and logistics delivery, a dependency verifier confirms that an order creation node depends on an inventory checking node, a payment processing node depends on the order creation node, a logistics delivery node depends on the payment processing node, a task priority generator calculates priority according to the dependency depth, and a topological ordering generation execution sequence is generated;
In the resource allocation process, a concurrency controller sets a concurrency threshold according to the processing capacity of the nodes, the concurrency degree of the order creation node is set to 4, the concurrency degree of the inventory inspection node is set to 6, the concurrency degree of the payment processing node is set to 8, the concurrency degree of the logistics delivery node is set to 4, and the resource allocator allocates computing resources according to the proportion of 4:6:8:4;
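Splitting a compute-resource budget in the 4:6:8:4 ratio above can be sketched with largest-remainder rounding so the allocations always sum to the budget (a sketch; the rounding scheme is an assumption, only the ratio comes from the example):

```python
def allocate(total_units, ratios):
    # Proportional split of a resource budget across tools, with
    # largest-remainder rounding to keep the sum exactly total_units.
    weight = sum(ratios.values())
    exact = {k: total_units * r / weight for k, r in ratios.items()}
    alloc = {k: int(v) for k, v in exact.items()}
    rest = total_units - sum(alloc.values())
    # Hand leftover units to the tools with the largest fractional parts.
    for k in sorted(exact, key=lambda k: exact[k] - alloc[k], reverse=True):
        if rest == 0:
            break
        alloc[k] += 1
        rest -= 1
    return alloc
```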
And in the execution monitoring stage, the performance collector records the processing indexes of each node, the average processing time length of the order creation node is 150 milliseconds, the processing success rate is 99.5 percent, the processing time length of the inventory inspection node is 100 milliseconds, the processing success rate is 99.8 percent, the load predictor predicts that the peak period processing capacity is improved by 30 percent based on the historical data, and the node resource quota is increased in advance.
While the application has been described in detail with reference to the foregoing embodiments, it will be understood by those skilled in the art that the foregoing embodiments may be modified or equivalents may be substituted for some of the features thereof, and that the modifications or substitutions do not depart from the spirit and scope of the embodiments of the application.
It will be apparent to those skilled in the art from this disclosure that various other changes and modifications can be made which are within the scope of the application as defined in the appended claims.

Claims (10)

1.一种基于云计算的企业数字化工具链动态协作与部署方法,其特征在于,包括以下步骤:1. A cloud computing-based enterprise digital tool chain dynamic collaboration and deployment method, characterized by comprising the following steps: S101、监测工具链中的版本变更情况,获取升级后新增的数据字段和废弃的接口信息,通过语义分析方法判断升级前后接口协议的差异性,根据所述差异性调整数据映射关系,得到调整内容,并生成数据映射关系变更清单;S101, monitoring version changes in the tool chain, obtaining newly added data fields and abandoned interface information after the upgrade, determining the difference between the interface protocols before and after the upgrade through a semantic analysis method, adjusting the data mapping relationship according to the difference, obtaining the adjustment content, and generating a data mapping relationship change list; S102、根据步骤S101中的数据映射关系变更清单,采用基于规则的推理方法,对接口协议和数据字段的变更内容进行分析,根据预先配置的接口协议适配规则,生成不同版本工具间的数据映射脚本;S102, according to the data mapping relationship change list in step S101, using a rule-based reasoning method, analyzing the changes in the interface protocol and data fields, and generating data mapping scripts between different versions of tools according to pre-configured interface protocol adaptation rules; S103、在工具版本升级导致步骤S102中的接口协议变更的情况下,则调整数据映射关系,在调整过程中,将新增字段与已有字段进行相似性分析,推断新增字段与已有字段的关联性,将所述新增字段纳入数据字段匹配器,构建数据映射关系,并删除废弃接口;S103. When the tool version upgrade causes the interface protocol in step S102 to be changed, the data mapping relationship is adjusted. 
During the adjustment process, a similarity analysis is performed on the newly added fields and the existing fields, the relevance between the newly added fields and the existing fields is inferred, the newly added fields are included in the data field matcher, the data mapping relationship is constructed, and the abandoned interface is deleted; S104、在步骤S103中的删除废弃接口后,采用基于图的最短路径搜索算法,绕过已删除的废弃接口,重新规划工具间的数据传递路径,重新形成工具链数据传递路径;S104, after deleting the abandoned interface in step S103, using a graph-based shortest path search algorithm to bypass the deleted abandoned interface, re-plan the data transfer path between tools, and re-form the tool chain data transfer path; S105、收集步骤S104中的重新形成的工具链数据传递路径中工具节点的数据处理时延和资源消耗,分析接口协议变更前后的平均处理时延和吞吐量,判断是否存在网络延迟和宽带限制,若存在,则获取数据处理过程中的数据分片大小和传输批次参数,若不存在,则不获取数据处理过程中的数据分片大小和传输批次参数;S105. Collect the data processing delay and resource consumption of the tool nodes in the re-formed tool chain data transmission path in step S104, analyze the average processing delay and throughput before and after the interface protocol change, and determine whether there is network delay and bandwidth limitation. If so, obtain the data slicing size and transmission batch parameters in the data processing process; if not, do not obtain the data slicing size and transmission batch parameters in the data processing process; S106、根据步骤S105中的调整数据分片大小和传输批次参数,优化数据在工具链中的流转效率,根据优化后的数据流转方案,动态调整工具链中各工具的执行顺序和并发度。S106. According to the adjustment of the data shard size and the transmission batch parameters in step S105, the data flow efficiency in the tool chain is optimized, and according to the optimized data flow plan, the execution order and concurrency of each tool in the tool chain are dynamically adjusted. 2.根据权利要求1所述一种基于云计算的企业数字化工具链动态协作与部署方法,其特征在于,在步骤S101中,包括:2. 
According to the cloud computing-based enterprise digital tool chain dynamic collaboration and deployment method of claim 1, it is characterized in that in step S101, it includes: 根据补丁包序列号从版本管理仓库中提取字段变更集合,并建立字段变更索引表,通过正则匹配识别出索引表中新增字段的名称与类型,并利用特征向量计算器获取语义特征值,选择相似度大于阈值的字段对,并通过数据类型校验器判断所述字段的类型兼容性,基于所述类型兼容性使用数据转换补偿器生成数据迁移规则,并通过规则验证器进行数据一致性验证,以标记映射规则的可靠等级,生成带可靠等级标记的映射关系清单;Extract the field change set from the version management warehouse according to the patch package serial number, and establish a field change index table. Identify the name and type of the newly added field in the index table through regular matching, and use the feature vector calculator to obtain the semantic feature value, select the field pair with a similarity greater than the threshold, and use the data type verifier to determine the type compatibility of the field. Based on the type compatibility, use the data conversion compensator to generate data migration rules, and use the rule validator to verify data consistency, so as to mark the reliability level of the mapping rules and generate a mapping relationship list with reliability level marks; 对于带可靠等级标记的映射关系清单中可靠等级低于阈值的映射规则,进行复核,并根据复核结果更新映射规则的可靠等级,生成数据映射变更清单。For the mapping rules whose reliability level is lower than the threshold in the mapping relationship list with reliability level mark, a review is conducted, and the reliability level of the mapping rules is updated according to the review result to generate a data mapping change list. 3.根据权利要求1所述一种基于云计算的企业数字化工具链动态协作与部署方法,其特征在于,在步骤S102中,包括:3. 
According to the cloud computing-based enterprise digital tool chain dynamic collaboration and deployment method of claim 1, it is characterized in that in step S102, it includes: 从数据映射关系变更清单中使用规则解析器读取接口变更规则集合,并通过规则解释器解析字段映射关系以构建规则语法树;Use the rule parser to read the interface change rule set from the data mapping relationship change list, and parse the field mapping relationship through the rule interpreter to build a rule syntax tree; 基于所述规则语法树生成初始规则关系图,并通过完整性检查器验证节点连接的完整性;Generate an initial rule relationship graph based on the rule syntax tree, and verify the integrity of node connections through an integrity checker; 针对完整性检查器的验证结果,从规则模板库中选择匹配规则以补全规则节点间的映射关系,得到补全后的规则关系图,利用映射模板生成器将所述补全后的规则关系图转换为数据处理指令序列,所述数据处理指令序列通过数据样本测试器验证数据转换的正确性,得到数据映射脚本。Based on the verification results of the integrity checker, matching rules are selected from the rule template library to complete the mapping relationship between rule nodes, and a completed rule relationship graph is obtained. The completed rule relationship graph is converted into a data processing instruction sequence using a mapping template generator. The data processing instruction sequence is verified for the correctness of data conversion by a data sample tester to obtain a data mapping script. 4.根据权利要求1所述一种基于云计算的企业数字化工具链动态协作与部署方法,其特征在于,在步骤S103中,包括:通过协议变更解析器读取新旧版本接口定义文件,提取字段名称与类型,并利用文本向量化工具生成字段特征向量,计算新增字段与已有字段间的余弦相似度;4. 
4. The cloud computing-based enterprise digital tool chain dynamic collaboration and deployment method according to claim 1, wherein step S103 comprises: reading the old-version and new-version interface definition files with a protocol change parser, extracting field names and types, generating field feature vectors with a text vectorization tool, and calculating the cosine similarity between each newly added field and the existing fields; if the similarity exceeds a threshold, generating a field mapping rule set; if the similarity does not exceed the threshold, generating no field mapping rule set; obtaining data samples of the newly added fields with a data sampler, and calculating data distribution feature vectors to judge the accuracy of the field mapping rules; and identifying the dependencies of deprecated interfaces with a call chain analyzer, generating a cleanup order table with a topological sorting algorithm, and confirming the deactivated state of the cleaned-up interfaces with a call verifier.
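Two building blocks named in step S103 have standard textbook forms, sketched below: cosine similarity over field feature vectors, and Kahn's topological sort producing a cleanup order table. The call-graph encoding (`{caller: [callees]}`) and the rule that a caller is retired before the interfaces it calls are assumptions, not details disclosed by the claim.

```python
import math
from collections import deque

def cosine_similarity(u, v):
    """Cosine similarity between two field feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

def cleanup_order(calls):
    """Kahn's topological sort over a call graph {caller: [callees]}:
    a deprecated interface is retired only after every interface that
    still calls it has been retired first."""
    indeg = {n: 0 for n in calls}
    for callees in calls.values():
        for c in callees:
            indeg[c] = indeg.get(c, 0) + 1
    queue = deque(sorted(n for n, d in indeg.items() if d == 0))
    order = []
    while queue:
        n = queue.popleft()
        order.append(n)
        for c in sorted(calls.get(n, [])):
            indeg[c] -= 1
            if indeg[c] == 0:
                queue.append(c)
    if len(order) != len(indeg):
        raise ValueError("cyclic dependency among deprecated interfaces")
    return order
```

The cycle check matters in practice: two deprecated interfaces that call each other have no safe removal order and need manual intervention.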
5. The cloud computing-based enterprise digital tool chain dynamic collaboration and deployment method according to claim 1, wherein step S104 comprises: sending detection requests to the nodes in the tool chain through a node detector, and obtaining the topological structure of the tool chain by comparing the response times of the nodes against a preset response time threshold; calculating the data flow paths between the nodes with a connectivity detector, and determining a weighted shortest path from the transmission rate and processing rate of each data flow path; calculating, by a path scorer, scores for the transmission delay, packet loss rate and load balance of the weighted shortest path, and determining the optimal transmission path from the weights of the scores; and updating, by a configuration dispatcher, the routing configuration tables of the nodes according to the optimal transmission path, while a performance collector gathers transmission performance indicators, and switching to an alternative path if an abnormal performance indicator is detected.
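In its simplest reading, the weighted shortest path of step S104 is Dijkstra's algorithm with per-hop weights derived from transmission and processing cost. The sketch below assumes each edge carries a `(transfer_ms, process_ms)` pair and sums them; the claim's multi-factor path scoring (delay, loss rate, load balance) is not reproduced here.

```python
import heapq

def weighted_shortest_path(graph, src, dst):
    """Dijkstra's algorithm. Assumed edge model: graph[u][v] is a pair
    (transfer_ms, process_ms); the hop weight is their sum."""
    dist = {src: 0.0}
    prev = {}
    heap = [(0.0, src)]
    done = set()
    while heap:
        d, node = heapq.heappop(heap)
        if node in done:
            continue
        done.add(node)
        if node == dst:
            break
        for nxt, (transfer_ms, process_ms) in graph.get(node, {}).items():
            nd = d + transfer_ms + process_ms
            if nd < dist.get(nxt, float("inf")):
                dist[nxt] = nd
                prev[nxt] = node
                heapq.heappush(heap, (nd, nxt))
    if dst not in dist:
        return None, float("inf")
    # Walk the predecessor chain back from dst to recover the path.
    path, cur = [dst], dst
    while cur != src:
        cur = prev[cur]
        path.append(cur)
    return path[::-1], dist[dst]
```

Rerunning the search on a graph with the deleted deprecated interfaces removed is exactly the "re-plan the data transfer path" step described in the abstract.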
6. The cloud computing-based enterprise digital tool chain dynamic collaboration and deployment method according to claim 1, wherein step S105 comprises: obtaining processing delay data and resource consumption data from the tool chain nodes with a performance collector; deriving uniformly scaled performance indicator data from the processing delay data and the resource consumption data with a data normalization processor, the performance indicator data comprising a processing delay indicator and a resource consumption indicator; sending probe packets through a network detector to obtain network transmission delay data; and, from the network transmission delay data, obtaining the optimal shard size parameter with a data sharding calculator.
7. The cloud computing-based enterprise digital tool chain dynamic collaboration and deployment method according to claim 1, wherein step S106 comprises: obtaining node resource monitoring data, and counting the processor and memory occupancy of the nodes with a resource load calculator to obtain resource load values.
8. The cloud computing-based enterprise digital tool chain dynamic collaboration and deployment method according to claim 7, wherein a data processor calculates the processing rate and transmission rate of each node from the resource load values, and obtains transmission samples of the corresponding tool chain through a data sampler.
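The uniformly scaled indicators of step S105 can be produced with plain min-max normalization. The shard-size rule below is a hypothetical heuristic: the 64 KB base, 1024 KB cap, and 100/RTT scaling are invented for illustration, as the patent does not disclose its formula.

```python
def normalize(values):
    """Min-max normalization to [0, 1], giving uniformly scaled indicators."""
    lo, hi = min(values), max(values)
    span = hi - lo
    return [0.0 if span == 0 else (v - lo) / span for v in values]

def shard_size(rtt_ms, base_kb=64, max_kb=1024):
    """Hypothetical shard-size heuristic: larger shards on low-latency
    links to amortize per-transfer overhead, smaller ones as round-trip
    time grows so retransmissions stay cheap. All constants are assumed."""
    size = base_kb * max(1, int(100 / max(rtt_ms, 1)))
    return min(size, max_kb)
```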
9. The cloud computing-based enterprise digital tool chain dynamic collaboration and deployment method according to claim 8, wherein a test data set is generated from the transmission samples, and the amount of data backlog is recorded to obtain backlog status data; and a dependency verifier checks the dependency relationships of the tool nodes according to the backlog status data and calculates priority values to obtain the tool execution sequence.
10. The cloud computing-based enterprise digital tool chain dynamic collaboration and deployment method according to claim 9, wherein a scheduling order table is obtained from the tool execution sequence, a resource allocator divides computing resource quotas according to the table, and a resource monitor dynamically adjusts the number of tools running concurrently.
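Claims 9 and 10 describe backlog-driven priorities and load-driven concurrency adjustment. The sketch below assumes the priority value is simply the queued item count and that concurrency scales down linearly with the busier of CPU and memory; neither formula is stated in the patent.

```python
def execution_sequence(backlog):
    """Priority value = queued item count; tools with the largest
    backlog are scheduled first (ties keep insertion order)."""
    return [tool for tool, _ in sorted(backlog.items(), key=lambda kv: -kv[1])]

def concurrency_quota(cpu_pct, mem_pct, max_workers=16):
    """Scale the number of concurrent tool instances down linearly with
    the busier of CPU and memory, never dropping below one worker."""
    load = max(cpu_pct, mem_pct) / 100.0
    return max(1, int(max_workers * (1.0 - load)))
```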
CN202510157478.3A 2025-02-13 2025-02-13 Cloud computing-based enterprise digital tool chain dynamic collaboration and deployment method Pending CN119621290A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202510157478.3A CN119621290A (en) 2025-02-13 2025-02-13 Cloud computing-based enterprise digital tool chain dynamic collaboration and deployment method

Publications (1)

Publication Number Publication Date
CN119621290A true CN119621290A (en) 2025-03-14

Family

ID=94906675

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202510157478.3A Pending CN119621290A (en) 2025-02-13 2025-02-13 Cloud computing-based enterprise digital tool chain dynamic collaboration and deployment method

Country Status (1)

Country Link
CN (1) CN119621290A (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160019049A1 (en) * 2010-05-26 2016-01-21 Automation Anywhere, Inc. System and method for resilient automation upgrade
CN116149728A (en) * 2023-02-16 2023-05-23 京东科技信息技术有限公司 CI/CD assembly conversion method and device
CN117492951A (en) * 2023-11-01 2024-02-02 浪潮软件股份有限公司 A standardized management method for the external interface of government affairs systems

Similar Documents

Publication Publication Date Title
CN111274095B (en) Log data processing method, device, equipment and computer readable storage medium
US7818150B2 (en) Method for building enterprise scalability models from load test and trace test data
CN109961204A (en) A business quality analysis method and system under a microservice architecture
US20220086075A1 (en) Collecting route-based traffic metrics in a service-oriented system
US20170109636A1 (en) Crowd-Based Model for Identifying Executions of a Business Process
CN102497435A (en) Data distributing method and device of data service
WO2022142013A1 (en) Artificial intelligence-based ab testing method and apparatus, computer device and medium
CN117667585B (en) A method and system for evaluating operation and maintenance efficiency based on operation and maintenance quality management database
CN112506771A (en) Message comparison method and device
CN117876059A (en) Method, device, equipment and medium for online order management
CN113360353A (en) Test server and cloud platform
CN118761745B (en) OA collaborative workflow optimization method applied to enterprise
US20060025981A1 (en) Automatic configuration of transaction-based performance models
CN119621290A (en) Cloud computing-based enterprise digital tool chain dynamic collaboration and deployment method
CN118551991A (en) A work order management method, system, device and medium
RU2532714C2 (en) Method of acquiring data when evaluating network resources and apparatus therefor
CN116882724B (en) Method, device, equipment and medium for generating business process optimization scheme
CN117474613A (en) Substation work ticket intelligent billing data interaction management system based on artificial intelligence
CN113051479B (en) File processing and recommendation information generation methods, devices, equipment and storage medium
CN117453493B (en) GPU computing power cluster monitoring method and system for large-scale multi-data center
CN113010491A (en) Cloud-based data management method and system
CN112801688A (en) Method and device for positioning reason of estimation failure
CN117808602B (en) Hot account billing method and related device based on sub-account expansion
CN111159988A (en) Model processing method and device, computer equipment and storage medium
CN111240652A (en) Data processing method and device, computer storage medium and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination