Detailed Description
As shown in FIG. 1, the embodiment relates to a container image generating system based on an agentless service mesh framework, which comprises a multidimensional rule configuration module, a full-link gray-release isolation module, a service policy injection module, a bytecode enhancement module, a full-link transparent transmission module and an image generating module.
The multidimensional rule configuration module comprises a rule analysis unit, a flow distribution unit, a resource weight calculation unit and a configuration generation unit. The rule analysis unit parses the key rule information from the multi-active architecture model, the traffic scheduling rules and the domain name configuration parameters provided by the user, and extracts the configuration items and traffic distribution logic of each multi-active space, yielding the basic rule data for subsequent calculation. The flow distribution unit calculates the traffic split from the traffic distribution logic and multi-active space definitions provided by the rule analysis unit, combined with real-time traffic monitoring data, yielding the distribution proportion and path plan of traffic across the multi-active units. The resource weight calculation unit computes and dynamically adjusts the resource weight of each instance from the traffic proportion calculated by the flow distribution unit and the resource availability index of the instance, yielding the resource allocation among tenants. The configuration generation unit dynamically generates a configuration file from the weight data produced by the resource weight calculation unit and the traffic path planning results, yielding a multidimensional rule configuration file for service mesh scheduling, which subsequent modules call and deploy.
Flow distribution proceeds as follows. An initial weight W_i is assigned to each data center DC_i. The health status H_i of each data center is obtained periodically from the monitoring system: H_i = 1 indicates normal operation, and H_i = 0 indicates failure (traffic is no longer distributed to that node). The weight is then dynamically adjusted according to the response time R_i and the load ratio L_i (current load / maximum bearable load), giving the dynamic weight W'_i = W_i · H_i · (1 − L_i) / R_i, where W'_i represents the adjusted weight and L_max represents the maximum load a single data center can bear (so L_i = current load / L_max). The total traffic T is distributed to DC_i in proportion to W'_i: T_i = T · W'_i / Σ_j W'_j. Requests are dispatched to the data centers one by one in weighted-polling order, following the weight proportions. Finally, H_i is checked periodically: DC_i is removed from the polling queue when H_i = 0, and re-added with its weight re-initialized when H_i = 1 again.
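The dynamic-weight distribution above can be sketched as follows. This is a minimal illustration assuming the reconstructed weight formula W'_i = W_i · H_i · (1 − L_i) / R_i with L_i = load / L_max; all function and variable names are illustrative, not part of the embodiment:

```python
def adjusted_weight(w, healthy, response_time, load, l_max):
    """Dynamic weight W'_i = W_i * H_i * (1 - L_i) / R_i (assumed reconstruction).

    A failed node (healthy=False) or a fully loaded node gets weight 0,
    so no traffic is routed to it.
    """
    if not healthy or load >= l_max or response_time <= 0:
        return 0.0
    return w * (1.0 - load / l_max) / response_time

def distribute(total_traffic, weights):
    """Split the total traffic T across data centers in proportion to W'_i."""
    s = sum(weights)
    if s == 0:
        return [0.0] * len(weights)
    return [total_traffic * w / s for w in weights]
```

For example, a healthy center with load ratio 0.2 and response time 0.5 s gets weight 1.6 from an initial weight of 1.0, while a failed center gets 0 and receives no share of the traffic.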
The resource weight calculation updates the weight from real-time load feedback (such as the current CPU utilization C_i): W_i ← α · (1 − C_i) + (1 − α) · W_i, where α is a smoothing factor controlling how new and old load information are blended. On this basis, a deep deterministic policy gradient (DDPG) algorithm is integrated: a deep learning algorithm is embedded into the system so that, beyond the simple feedback formula that adapts traffic weights in a dynamic environment, long-term benefits are learned, future performance impact is taken into account, and local load imbalance or system bottleneck problems are reduced.
The deep deterministic policy gradient algorithm comprises: defining a state space S_t = {C_1, T_1, Q_1, …, C_n, T_n, Q_n}, where C_i represents the CPU utilization of the i-th server, T_i its average response time and Q_i its current queue length; defining an action, namely the weight-adjustment space A_t = {ΔW_1, ΔW_2, …, ΔW_n} over the servers; and defining a reward function that measures the efficiency of the traffic distribution, R_t = −λ_1 · Var − λ_2 · avg(T_i), where Var is the variance of the traffic distribution (the smaller, the more balanced) and λ_1, λ_2 are hyper-parameters trading off balance against response time.
The DDPG training process is as follows. An action A_t (i.e. a weight adjustment) is generated by the Actor network, and the Critic network evaluates the value Q of the state-action pair (S_t, A_t). The Critic network is updated towards the temporal-difference (TD) target y_t = R_t + γ · Q(S_{t+1}, A_{t+1}), and the Actor network is updated by the policy-gradient method. The overall dynamic weight adjustment algorithm is: 1) initialize the Actor and Critic networks and their parameters; 2) store state transitions (S_t, A_t, R_t, S_{t+1}) in an experience replay buffer (Replay Buffer); 3) randomly sample mini-batches from the buffer to train the networks; 4) update the target networks every fixed number of steps.
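As an illustration of steps 2) and 4) above, the sketch below shows an experience replay buffer, the TD target, and a soft (Polyak) target-network update, a common variant of the periodic target update in DDPG. The network training itself is omitted and all names are illustrative:

```python
import random
from collections import deque

class ReplayBuffer:
    """Step 2): store transitions (S_t, A_t, R_t, S_{t+1}) for off-policy training."""
    def __init__(self, capacity=10000):
        self.buf = deque(maxlen=capacity)  # old transitions are evicted first

    def push(self, s, a, r, s_next):
        self.buf.append((s, a, r, s_next))

    def sample(self, batch_size):
        """Step 3): random mini-batch sampling breaks temporal correlation."""
        return random.sample(self.buf, min(batch_size, len(self.buf)))

def td_target(reward, q_next, gamma=0.99):
    """Critic target y_t = R_t + gamma * Q(S_{t+1}, A_{t+1})."""
    return reward + gamma * q_next

def soft_update(target_params, online_params, tau=0.005):
    """Step 4): Polyak averaging of target-network parameters,
    theta_target <- tau * theta_online + (1 - tau) * theta_target."""
    return [tau * o + (1 - tau) * t for t, o in zip(target_params, online_params)]
```

A soft update with small tau changes the targets slowly, which stabilizes the Critic's bootstrapped learning; a hard copy every N steps, as the text describes, is the other standard option.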
The full-link gray-release isolation module comprises a multi-tenant isolation unit, a flow rule importing unit, an instance flow distribution unit and a version management unit. The multi-tenant isolation unit performs inter-tenant resource and service isolation according to the service attributes, resource usage requirements and isolation policies of the tenants, yielding a multi-tenant isolation scheme that keeps the data and traffic of different tenants completely independent. The flow rule importing unit parses and imports the traffic distribution rules configured by users (such as user attributes, geographic locations and service characteristics), yielding traffic scheduling rules applicable to different tenants and service scenarios. The instance flow distribution unit performs traffic distribution calculation and path planning according to the results of the flow rule importing unit and the real-time state of every instance in the system, through sticky filters, label filters and load-balancing filters organized as a responsibility chain. The version management unit runs microservices of different versions in independent lanes according to the service logic requirements and the system upgrade strategy, performing version isolation and dynamic management, yielding a flexible upgrade scheme that supports blue-green deployment, canary release and similar scenarios.
Multi-tenant isolation refers to independent partitioning of computing, storage and network resources through virtualization technology, combined with tenant identifier separation and access permission control, so that the resources of different users do not interfere with each other. Meanwhile, through Virtual Private Cloud (VPC), encrypted transmission, rate-limiting policies and fine-grained permission management, data protection and service stability are strengthened, resource contention and data leakage are prevented, and secure, high-performance operation in the multi-tenant environment is ensured.
The sticky filter in the responsibility chain adopts hash allocation based on the session identifier to ensure that requests of the same user are always routed to a fixed instance, maintaining session consistency. Specifically, Instance = Hash(SessionID) mod N, where Instance denotes the target instance, SessionID denotes the session identifier of the user request (such as a Cookie or Token), and N denotes the number of back-end instances.
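A minimal sketch of the session-sticky hash allocation, using SHA-256 as an assumed stable hash function (Python's built-in hash() is salted per process and would not keep the mapping stable across restarts):

```python
import hashlib

def pick_instance(session_id: str, n_instances: int) -> int:
    """Instance = Hash(SessionID) mod N.

    A cryptographic hash gives a stable, well-spread mapping so the
    same session always lands on the same back-end instance.
    """
    digest = hashlib.sha256(session_id.encode("utf-8")).hexdigest()
    return int(digest, 16) % n_instances
```

Note that with a plain modulus, changing N remaps most sessions; consistent hashing would reduce that churn, at the cost of a more involved implementation.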
The label filter adopts a multidimensional-matching priority routing strategy that routes traffic to the instance set with the highest priority. Specifically, Score = Σ_i w_i · Match(Tag_i, Rule_i), where w_i represents the weight, i.e. the priority of each tag, Tag_i represents the i-th tag carried by the request, Rule_i represents the tag rule to match, and Match is a Boolean function returning 1 (matched) or 0 (unmatched).
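A minimal sketch of the label filter's weighted matching score Σ_i w_i · Match(Tag_i, Rule_i). The rule structure of (key, expected value, weight) triples is an assumption for illustration:

```python
def score(instance_tags, rules):
    """Sum the weights of all rules whose (key, value) the instance's tags match."""
    return sum(w for key, val, w in rules if instance_tags.get(key) == val)

def route(instances, rules):
    """Route to the instance set with the highest matching score.

    `instances` is a list of (name, tags-dict) pairs; all equally
    best-scoring instances are returned as the candidate set.
    """
    best = max(score(tags, rules) for _, tags in instances)
    return [name for name, tags in instances if score(tags, rules) == best]
```

A version tag with weight 10 dominates a region tag with weight 1, so a request preferring version v2 lands on v2 instances even when several instances share the same region.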
The load-balancing filter optimizes traffic distribution through weighted polling. Specifically, P(i) = W_i / Σ_j W_j, where P(i) represents the probability that traffic is allocated to instance i and W_i represents the weight of the instance.
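The weighted polling can be sketched with the smooth weighted round-robin variant (popularized by Nginx), which realizes P(i) = W_i / Σ_j W_j deterministically: over every full cycle of Σ W_j requests, instance i receives exactly W_i of them. The choice of this particular variant is illustrative, not necessarily the embodiment's exact algorithm:

```python
def smooth_wrr(weights, n_requests):
    """Smooth weighted round robin.

    Each round, every instance's current score grows by its weight;
    the instance with the highest score serves the request and its
    score is reduced by the total weight. This interleaves instances
    instead of sending bursts to the heaviest one.
    """
    current = [0] * len(weights)
    total = sum(weights)
    order = []
    for _ in range(n_requests):
        current = [c + w for c, w in zip(current, weights)]
        i = max(range(len(weights)), key=lambda k: current[k])
        current[i] -= total
        order.append(i)
    return order
```

With weights [5, 1, 1], seven consecutive requests yield five hits on instance 0 and one each on instances 1 and 2, matching the stated probabilities exactly.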
Version management achieves efficient iteration and reliable deployment of versions through a combination of centralization and automation. The system supports multi-version coexistence and a gray-release mechanism, so the stability and performance of a new version are verified gradually without affecting existing services. By integrating continuous integration / continuous delivery (CI/CD) tool chains, version management covers code compilation, functional testing, environment adaptation, automated deployment and related links, and, combined with a version rollback mechanism, keeps the upgrade process controllable and safe to meet high-availability requirements.
The service policy injection module comprises a policy adaptation unit, a control plane unit, a registry synchronization unit and a bytecode enhancement unit. The policy adaptation unit parses and configures the rule information input by the user (including routing policies, authorization policies, retry policies, circuit-breaking policies and the like), yielding a dynamic policy configuration scheme adapted to mainstream service frameworks. The control plane unit performs resilience enhancement, event bus configuration, synchronization strategy implementation, logging and the opening of system telemetry data according to the system running state and service requirements, yielding a service control plane with real-time monitoring, elastic scaling and event-driven capability. The registry synchronization unit keeps the service state of multiple centers or regions consistent by synchronizing the service information of each registry in real time, providing technical support for cross-region load balancing and disaster-recovery backup; it synchronizes the health state, online/offline information and metadata changes of service instances in real time, so cross-region load balancing decisions are based on the latest service state. The bytecode enhancement unit enhances the target application according to the service policies and dynamic loading requirements, yielding high-performance service instances with the policy logic injected.
The event bus configuration works as follows: real-time events are received through a data input layer, and unified message-body parsing rules are configured to generate a unified context object; a data processing layer performs data parsing, filtering, enrichment and conversion through filtering operators and enrichment operators; and the processed data can be distributed to different message queues or storage systems through a data output layer. A connector mechanism allows new data sources or targets to be supported quickly, while custom scripts can be configured to adjust operator logic dynamically without frequent service restarts.
The filtering operator screens the input data through predefined rules, retaining the data that meets the condition and discarding the rest. Its core is a condition-based Boolean decision, Y = {x ∈ X | P(x) = true}, where X is the set of input data, P(x) is a predicate function defining the filter condition, and Y is the output data set containing only the data satisfying P(x) = true. The enrichment operator supplements or enhances the input data by introducing external data sources or calculation logic, providing more context for downstream processing; specifically, Y = {(x, E(x)) | x ∈ X}, where X is the set of input data, E(x) is the enrichment function defining how to supplement or enhance the data x, and Y is the output data set containing the enriched data. In the event bus, the filtering operator and the enrichment operator usually cooperate in the form of a responsibility chain: invalid or irrelevant events are first filtered out by the filtering operator to reduce the data processing pressure, the enrichment operator then enhances the context of the retained data set, and finally the responsibility chain emits its output.
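A minimal sketch of filtering and enrichment operators composed as a responsibility chain, as described above (operator factories and event fields are illustrative):

```python
def filter_op(predicate):
    """Y = {x in X | P(x) = true}: keep only events satisfying the predicate."""
    def op(events):
        return [e for e in events if predicate(e)]
    return op

def enrich_op(enrich_fn):
    """Y = {(x, E(x)) | x in X}: merge extra context into each event."""
    def op(events):
        return [{**e, **enrich_fn(e)} for e in events]
    return op

def run_chain(events, operators):
    """Responsibility chain: each operator's output feeds the next.

    Filtering first cuts the volume before the (typically costlier)
    enrichment step runs.
    """
    for op in operators:
        events = op(events)
    return events
```

For instance, a chain that drops non-error events and then attaches a tenant tag leaves only enriched error events for the output layer.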
The implementation of the synchronization strategy specifically comprises the following steps:
i) Dynamic grouping and route planning: a service logic group G is mapped to a specific resource group (such as a server or a topic queue); specifically, G = f(U, T), where G represents the group, U represents the available resource units such as servers and service nodes, and T represents the tasks or topics.
For high-load scenarios, a priority-based degradation strategy ensures that critical events are handled first, using the priority function P(x) = α_1 · W(x) + α_2 · R(x), where P(x) represents the priority, W(x) the weight, R(x) the resource occupancy ratio, and α_1, α_2 adjustable coefficients.
ii) Messages are filtered, parsed and distributed layer by layer through the responsibility-chain pattern to achieve real-time monitoring and link optimization. The data processing of each step is realized by an operator function, Y_{i+1} = f_i(X_i), where X_i is the input data, f_i the operator logic and Y_{i+1} the processed output data.
iii) A bi-directional or multi-directional communication protocol (e.g. gRPC, MQTT) is used to ensure data consistency between different systems; specifically, a synchronization window W_s = max(T_r − T_d, 0), where W_s represents the synchronization delay window, T_d the data generation time and T_r the data reception time.
Logging and system telemetry data are exposed through open standards: multiple monitoring protocols (such as OpenTelemetry and Prometheus) are supported, providing full-link tracing, performance metric collection and real-time analysis. Logging adopts a layered design covering key events, errors, traffic, resource utilization and other dimensions, and supports dynamic configuration of log level and output format to meet different debugging needs. Efficient log storage, query and alerting are achieved through seamless integration with external log systems (such as ELK and Fluentd), optimizing system performance.
The bytecode enhancement module comprises a code interception unit, a modification point positioning unit, an instruction set operation unit and a code reloading unit. When a program is loaded, the code interception unit reads and preprocesses the target code file according to the binary file or intermediate representation file information, yielding an intermediate representation of the program for analysis. The modification point positioning unit selects the specific code fragments that need to be enhanced (such as function entries or specific calls) from the code information provided by the interception unit, combined with the enhancement requirements, and judges which modification points satisfy the enhancement conditions, yielding a target modification point list. The instruction set operation unit performs instruction insertion, replacement or deletion according to the target modification point list, injecting the required bytecode logic (including entry enhancement, exit enhancement and behavior overriding), yielding an enhanced instruction set. The code reloading unit reloads the modified instruction set, overlaying the enhanced code into the running environment, yielding enhanced high-performance program logic.
The registry synchronization comprises policy service execution, service controller management and context synchronization service coordination based on strong consistency, and adopts a log-replication synchronization mechanism combined with vector-clock-based strong consistency synchronization. Specifically, each registry maintains a log sequence L_i = {l_1, l_2, …, l_n}; the registry state is determined by the log sequence, S_i = f(L_i), where f represents the state generating function; and the log synchronization rule is L_follower = L_leader, the leader holding the longest log among the nodes. Practical service stress testing found that the pressure on the leader node is heavy and single-point failures occur easily, the synchronization delay lengthens when strong consistency is required, and the consistency cost is high.
In the strong consistency synchronization, each node maintains a vector clock VC_i representing the timestamps of the global events observed by node i. The number of elements of the vector clock VC equals the number of nodes, and each element VC[i] is the local time of node i. Each registry R_i initializes its vector clock VC_i with all elements 0; when a service instance registers or deregisters, registry R_i increments its local component, VC_i[i] ← VC_i[i] + 1, and the service state change is represented by an event E_i carrying the vector clock VC_i and the change content. The synchronization process is as follows: registry R_i sends event E_i to the other registries R_j; the receiver R_j merges the vector clocks, VC_j[k] ← max(VC_j[k], VC_i[k]) for every k, and decides from the event timestamps whether to apply the change. If multiple events conflict, the vector-clock ordering rule is used: E_a is ordered before E_b when VC_a[k] ≤ VC_b[k] for all k and VC_a ≠ VC_b. This ensures events are handled in the same order on all nodes, eliminating inconsistent states.
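A minimal sketch of the vector-clock maintenance and merge rule described above, with a happens-before check implementing the ordering rule (class and method names are illustrative):

```python
class VectorClock:
    """One vector clock per registry node; element k is node k's observed time."""
    def __init__(self, n_nodes, node_id):
        self.vc = [0] * n_nodes   # initialized to all zeros
        self.node_id = node_id

    def tick(self):
        """Local event (register/deregister): VC_i[i] <- VC_i[i] + 1."""
        self.vc[self.node_id] += 1

    def merge(self, other_vc):
        """On receiving an event: VC_j[k] <- max(VC_j[k], VC_i[k]),
        then count the receipt as a local event."""
        self.vc = [max(a, b) for a, b in zip(self.vc, other_vc)]
        self.vc[self.node_id] += 1

def happens_before(a, b):
    """a -> b iff a[k] <= b[k] for all k and a != b (partial order)."""
    return all(x <= y for x, y in zip(a, b)) and a != b
```

Two events for which neither happens_before holds are concurrent; a deployment would still need a deterministic tie-breaker for those, which this sketch leaves out.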
Synchronizing the health status of service instances relies on periodic health checks based on Ping, HTTP status codes and business-level heartbeat detection. The health status is calculated over a sliding window that updates the state data in real time and avoids judgments distorted by stale data: if the success ratio of the checks within the window reaches a given threshold, the health status of the i-th instance at time t is 1. In practice, however, the sliding-window method, although highly real-time, cannot predict potential problems in advance.
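A minimal sketch of the sliding-window health calculation, assuming the reconstructed rule that an instance is healthy when its success ratio over the window meets a threshold (the window size and threshold values are illustrative):

```python
from collections import deque

class SlidingWindowHealth:
    """Health = 1 iff the success ratio over the last `window` checks
    reaches `threshold`; old results fall out of the window automatically."""
    def __init__(self, window=5, threshold=0.8):
        self.results = deque(maxlen=window)  # 1 = check passed, 0 = failed
        self.threshold = threshold

    def record(self, success: bool):
        self.results.append(1 if success else 0)

    def healthy(self) -> bool:
        if not self.results:
            return True  # no evidence yet: assume healthy
        return sum(self.results) / len(self.results) >= self.threshold
```

Because the deque has a fixed maxlen, each new probe result evicts the oldest one, which is exactly the "avoid stale data" behavior the text describes.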
The health state is preferably predicted by a time-series prediction algorithm. Specifically, an ARIMA model (autoregressive integrated moving average) predicts indexes with strong trend and periodicity: X_t = c + Σ_{i=1}^{p} φ_i · X_{t−i} + Σ_{j=1}^{q} θ_j · ε_{t−j} + ε_t, where p represents the order of the autoregressive (AR) part, q represents the order of the moving-average (MA) part, and ε_t represents white noise. First the stationarity of the sequence is checked (e.g. by a unit-root test), non-stationary sequences are differenced, and the optimal p, q values are selected by the AIC/BIC criteria. The ARIMA model is fitted, the parameters φ, θ and c are computed, and the future time point X_{t+h} is thereby predicted. An LSTM model (long short-term memory network) can also be incorporated for high-dimensional, nonlinear, complex health state data.
The long short-term memory network learns long-term dependencies in the time series through a recurrent neural network (RNN) architecture: forget gate f_t = σ(W_f · [h_{t−1}, x_t] + b_f); input gate i_t = σ(W_i · [h_{t−1}, x_t] + b_i); cell state update C_t = f_t ⊙ C_{t−1} + i_t ⊙ tanh(W_C · [h_{t−1}, x_t] + b_C); output gate o_t = σ(W_o · [h_{t−1}, x_t] + b_o), h_t = o_t ⊙ tanh(C_t), where x_t is the current input, h_{t−1} is the hidden state at the previous time step and C_t is the current cell state. The health state index X_t is organized into sliding-window sequence data; an LSTM network is constructed that takes the sequence {X_{t−n}, X_{t−n+1}, …, X_t} as input and outputs the prediction X_{t+1}, so future health states can be predicted in a rolling manner.
The time-series prediction algorithm is implemented as follows. Historical health state indexes of the service, such as response time, CPU utilization and error rate, are collected, and data preprocessing is performed, namely stabilizing the time series and removing noise and trend. Stationarity is verified and the optimal p, d and q parameters are determined through AIC or BIC. In the distributed scenario, ARIMA captures the linear trend, yielding a residual sequence; the residual sequence is modeled nonlinearly with an LSTM, converted into the time-step format, and a multi-layer LSTM receives the sequence data and captures the temporal dependence. The superposition of the two gives the prediction X̂_t = X̂_t(ARIMA) + ê_t(LSTM). A parallel model instead feeds the time series to the ARIMA and LSTM models simultaneously and fuses their predictions through weights: X̂_t = w · X̂_t(ARIMA) + (1 − w) · X̂_t(LSTM). The method combines the linear prediction capability of ARIMA with the nonlinear modeling capability of LSTM, adds an Attention mechanism to the LSTM to dynamically focus on key time points, and continuously updates the prediction model with the sliding-window technique to adapt to dynamic changes of the health state, realizing accurate prediction and early warning of the service health state and improving the robustness of health state prediction.
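As a highly simplified stand-in for the ARIMA/LSTM fusion, the sketch below fits an AR(1) model by least squares for the linear component and fuses it with an externally supplied nonlinear prediction through a fixed weight w. A real implementation would use full ARIMA and LSTM models; every name here is illustrative:

```python
def ar1_fit(series):
    """Least-squares AR(1): X_t ~ c + phi * X_{t-1}.

    A one-lag stand-in for the ARIMA linear component; phi and c are
    estimated from consecutive pairs of the series.
    """
    x, y = series[:-1], series[1:]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var = sum((a - mx) ** 2 for a in x)
    phi = cov / var if var else 0.0
    c = my - phi * mx
    return c, phi

def fused_forecast(series, nonlinear_pred, w=0.7):
    """Weighted fusion X_hat = w * linear + (1 - w) * nonlinear,
    mirroring the parallel-model combination described above."""
    c, phi = ar1_fit(series)
    linear = c + phi * series[-1]
    return w * linear + (1 - w) * nonlinear_pred
```

On a perfectly linear series like 1, 2, 3, 4, 5 the fit recovers phi = 1, c = 1, so the linear forecast is 6, and the fusion degenerates gracefully when both components agree.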
On-demand loading of the class management service refers to optimizing the class loading flow through lazy loading and dynamic loading techniques, reducing system resource occupation and improving service startup efficiency and runtime flexibility. It includes full lifecycle management of classes and class instantiation loading optimization. The invention dynamically loads classes with a custom class loader and manages each class finely through creation, loading, initialization, use and unloading. By isolating the class loading environments of different modules, class loading conflicts are avoided. Combined with the factory pattern or the proxy pattern, class instantiation is delayed, ensuring that the relevant logic is loaded only when actually invoked. The on-demand loading mechanism, combined with the plug-in design of the invention, allows plug-in modules to be dynamically loaded and unloaded at runtime.
Policy service execution specifically means realizing strongly consistent cross-region service governance through the steps of dynamic rule loading, real-time state sensing, policy execution and synchronization, and multi-region consistency guarantee. First, policy rules are loaded at the service startup stage and rule changes are monitored; the policy rules are parsed into a unified policy model and stored in memory. Second, the registry senses the health state, online/offline information and metadata changes of service instances in real time, and local decisions are updated dynamically, including weight adjustment, rate-limit threshold setting and route optimization. The policy execution results are broadcast to all nodes through the event bus or the registry, with version control ensuring the consistency of the synchronized results; if a conflict is detected during synchronization, it is resolved by a master-slave mechanism or by voting, rolling back to the previous version if necessary. Finally, multi-region synchronization efficiency is optimized through incremental synchronization, batching and a transaction mechanism, while state consistency is maintained and multi-protocol adaptation is supported, ensuring that policy synchronization across heterogeneous systems is correct.
Service controller management ensures that service calls under the multi-active architecture are efficient, stable and reliable through fine-grained control and strategic management, covering service routing, load balancing, rate limiting and degradation, thereby realizing service quality guarantees and efficient resource utilization. The service controller distributes requests to qualifying service instances through dynamic routing rules and senses the service state in real time to adjust routes; it dynamically updates weights combined with load-balancing algorithms such as polling, weighted random or least connections to optimize resource allocation; it sets rate-limiting rules through the token bucket algorithm, limiting the request rate by QPS or TPS and configuring emergency strategies for over-limit conditions; and when a service is unavailable, it executes a degradation strategy based on trigger conditions, such as returning a default response or calling a standby service.
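A minimal sketch of the token bucket rate limiter named above; the rate and capacity parameters are illustrative:

```python
class TokenBucket:
    """Token bucket: `capacity` caps bursts, `rate` (tokens/second)
    caps the sustained request rate (e.g. QPS)."""
    def __init__(self, rate, capacity):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity  # start full so an initial burst is allowed
        self.last = 0.0

    def allow(self, now):
        """Refill based on elapsed time, then spend one token if available."""
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False
```

Passing the clock in as `now` keeps the sketch deterministic and testable; a production limiter would read a monotonic clock internally and handle concurrency.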
Context synchronization service coordination ensures the consistency of cross-node service context information in the distributed multi-active architecture. Through this mechanism, services on different nodes can share critical context data, enabling global visibility and consistent handling of the request links. Context information propagates between nodes through efficient serialization and network transport mechanisms. A synchronization protocol guarantees the consistency of state data among nodes, supporting both eventual consistency and strong consistency models; the specific choice depends on the requirements of the service scenario.
In the strong consistency scenario, a distributed consensus protocol such as Paxos or Raft is commonly used. The nodes log the context changes, ensuring that a majority of nodes receive the same change log: a log entry L_i is committed if Majority(L_i) = true, i.e. if it is acknowledged by more than half of the nodes. A heartbeat mechanism H ensures that a leader is present: if no heartbeat is received within the timeout, a new leader election is triggered. Compressing the context data with Huffman coding or an LZ algorithm, C_compressed = f_compress(C), where f_compress is the compression function, can significantly reduce the amount of data transmitted. This provides a strict consistency guarantee, applicable to critical context synchronization across data centers.
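A minimal sketch of the majority-acknowledgement commit rule Majority(L_i) = true (names illustrative; a real Raft or Paxos implementation additionally involves terms, log matching and leader election):

```python
def majority(acks: int, n_nodes: int) -> bool:
    """True once strictly more than half of the cluster has acknowledged."""
    return acks > n_nodes // 2

def commit_log(entry, ack_nodes, cluster_size):
    """Commit the change log entry only when a majority acknowledged it;
    otherwise return None and keep the entry uncommitted."""
    return entry if majority(len(ack_nodes), cluster_size) else None
```

In a 3-node cluster, 2 acknowledgements commit an entry while 1 does not, which is why such clusters tolerate the loss of a single node.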
The full-link transparent transmission module comprises a message interception unit, a transparent transmission message definition unit, a data aggregation encapsulation unit and a link closed-loop unit. The message interception unit intercepts requests and extracts metadata from the interaction requests between service consumers and providers, yielding the raw data of the transparent transmission message, including the request identifier and context information. The transparent transmission message definition unit parses and standardizes the message format according to a predefined extended transparent transmission message structure, yielding standardized transparent transmission messages that meet the requirements of link tracing and traffic management. The data aggregation encapsulation unit aggregates and encapsulates the standardized transparent transmission messages, yielding transparent transmission data packets containing the complete link information. The link closed-loop unit performs traffic closed-loop calculation and path optimization within the link from the encapsulated transparent transmission data packets, combined with the service unit and lane rules, yielding an execution scheme for the closed-loop link.
As shown in FIG. 2, the implementation device of the container image generating system based on the agentless service mesh framework in this embodiment comprises an upper-layer application, the container image generating system and an external data layer.
The upper-layer application provides rule input, policy injection result feedback, multi-active model visualization and service retrieval. Rule input can use Spring Boot to expose RESTful API and gRPC interfaces, receiving rules in JSON or YAML format; rule content is parsed by an ANTLR parser, and the responsibility chain is dynamically enhanced by means of Javassist or ByteBuddy, so rules are injected in real time and take effect without restarting. Policy injection feedback collects execution data with Prometheus, delivers results over a Kafka or RabbitMQ event bus, and notifies the upper-layer application through Webhook callbacks. Multi-active model visualization integrates OpenTelemetry or custom probes to collect node and traffic data, aggregates the data with Redis, pushes updates through WebSocket, and dynamically renders the topology view with ECharts. The service retrieval module synchronizes service instance information from Consul or Eureka, builds a distributed index with Elasticsearch, and provides efficient multidimensional queries and real-time updates.
The data layer cooperates with external data sources through a persistence layer interface, a transaction management framework (Seata) and the like. The database is operated directly through the standard JDBC (Java Database Connectivity) interface, or database operations are simplified with an ORM framework such as MyBatis or Hibernate, and data integrity is ensured by combining database transactions with the business logic. When handling document, time-series, key-value and similar data, it can cooperate with non-relational databases through a distributed cache system or a message queue (e.g. Kafka).
Service policy injection in the container image generating system works as follows. A service instance sends a registration request to the registry Nacos of the governance policy module at startup, providing instance metadata (service name, address, running state and the like), and the service discovery module dynamically resolves the registry according to the call request and returns the list of service instances that meet the conditions. The configuration management function delivers dynamic configuration through a publish-subscribe pattern: after a service instance starts, it subscribes to the relevant configuration update topics, and the dynamic configuration data, combined with service discovery, influences the calling priority and routing behavior of the service. Combined with dynamic configuration, when traffic exceeds a threshold or service performance degrades, the module automatically triggers the degradation and rate-limiting mechanisms to ensure system stability. The module periodically invokes the health check interface of each service instance to actively check its health status, and automatically marks abnormal service instances by analyzing real-time data of service invocation (such as error rate and delay). Across the overall service invocation chain, the module synchronizes the invocation contexts (e.g. trace ID and user tags), ensuring that invocation information is consistent across services.
In practical experiments under a complex high-concurrency multi-tenant business scenario, the device was operated with a traffic load of 1000 TPS (transactions per second), 50 microservice instances and 10 multi-active spaces as configuration parameters. The experimental data show that the overall processing delay of the system stabilizes within 30 milliseconds, global synchronization after a traffic scheduling rule adjustment takes less than 1 second, multi-tenant resource isolation accuracy reaches 99.9%, and the execution success rate of service policy injection and dynamic enhancement is 98.7%. In the simulation experiment, the platform successfully realized accurate traffic distribution and complete link tracing, supported the display of test windows for blue-green deployment and canary release, and reduced the failure rate of version switching to 0.05%.
As shown in Table 1, theoretical analysis shows that, through the cooperation of the modules, such as the multidimensional rule configuration module, the full-link gray-release isolation module, the service policy injection module and the full-link transparent transmission module, the platform can efficiently manage diversity in a complex system and realize dynamic resource scheduling, real-time monitoring and elastic system expansion under high-concurrency scenarios. The device operates stably, supports rapid iteration, flexible expansion and performance optimization of services, and provides comprehensive guarantees for the multi-tenant architecture and modern microservice management of enterprises.
Table 1. Comparison of technical characteristics
Compared with the prior art, in terms of flexibility, the multidimensional rule configuration module supports dynamically constructed rules that are loaded and adjusted in real time according to the scenario, and adopts a micro-kernel architecture that retains only the necessary modules, such as basic communication and plug-in management, while other extended functions are realized independently as modules or plug-ins. This design reduces the complexity of the core and improves the stability and maintainability of the system. Modules and plug-ins support loading and unloading on demand, so functions can be flexibly adjusted according to service requirements, and flow strategies such as sticky routing (ensuring that requests from the same user are always routed to the same instance) and label routing (determining the routing path according to label fields) are provided. The full-link gray scale isolation module supports service isolation, multi-tenant environments, and fine-grained flow management. The byte code enhancement module allows functions to be dynamically inserted or problems to be repaired at runtime, eliminating the need to restart the service; this is particularly suitable for scenarios with high availability requirements, where service interruption must be avoided. In terms of reliability, the service strategy injection module detects the health status of service instances by combining active probing and passive monitoring, covering indicators such as resource utilization, network delay, and response time. When the health of an instance deteriorates, flow can be switched to a healthy standby instance in real time, ensuring service continuity. The design of multi-activity flow scheduling enables dynamic scheduling across regions or availability zones to achieve fault isolation.
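The sticky-routing and label-routing strategies described above can be illustrated with a short sketch; the function names and the instance record layout are assumptions for illustration, not the patent's implementation.

```python
import hashlib

def sticky_route(user_id, instances):
    """Sticky routing: hash the user id so the same user is always
    routed to the same instance (while the instance list is unchanged).
    md5 is used here only as a stable, non-cryptographic hash."""
    digest = hashlib.md5(user_id.encode("utf-8")).hexdigest()
    return instances[int(digest, 16) % len(instances)]

def label_route(request_labels, instances):
    """Label routing: select instances whose labels match every label
    field on the request; fall back to all instances if none match."""
    matched = [
        inst for inst in instances
        if all(inst["labels"].get(k) == v for k, v in request_labels.items())
    ]
    return matched or instances
```

As a usage example, a request labeled `{"env": "gray"}` is routed only to instances carrying that label, which is the mechanism the full-link gray scale isolation described above relies on to keep gray flow off production instances.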
Through an intelligent routing strategy, the distribution is adjusted according to flow pressure and resource usage, avoiding single-point overload. In scenarios with strict policy-consistency requirements, event-bus coordination and strong synchronization guarantees ensure reliable synchronization through distributed transactions or a strong consistency algorithm. In terms of adaptability, flexible flow scheduling rules are provided, such as distribution by priority, gray scale release, and user grouping, to meet the requirements of various service models; common load balancing algorithms (such as round robin, least response time, and hash routing) are supported, and developers are allowed to customize algorithms to suit special requirements. Routing rules can be dynamically adjusted without interrupting service, ensuring service flexibility and real-time responsiveness. The system has built-in support for common communication protocols and allows new protocols to be added, making it widely applicable to existing service architectures. In terms of portability, all modules define their functions through interfaces, allowing modules to be independently implemented in different languages or frameworks and enhancing cross-language compatibility; modules automatically adapt to the running environment when loaded, and unloaded modules do not affect the performance or stability of core services. The framework is suitable for complex microservice systems, can also function under a monolithic architecture, and facilitates a smooth transition between the two. Kubernetes service discovery, automatic scaling, and container orchestration capabilities are natively integrated, improving containerized deployment efficiency. Multi-cloud compatibility facilitates migration between different cloud environments and avoids vendor lock-in.
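The pluggable load-balancing algorithms mentioned above (round robin, least response time, hash routing) can be sketched as interchangeable strategies sharing one `pick` interface, which is also how a developer-customized algorithm would plug in. The class names and the instance record fields are illustrative assumptions.

```python
import itertools

class RoundRobin:
    """Round robin: cycle through the instances in order."""
    def __init__(self, instances):
        self._cycle = itertools.cycle(instances)
    def pick(self, request=None):
        return next(self._cycle)

class LeastResponseTime:
    """Least response time: pick the instance with the lowest
    observed average latency (field name is an assumption)."""
    def __init__(self, instances):
        self.instances = instances
    def pick(self, request=None):
        return min(self.instances, key=lambda i: i["avg_rt_ms"])

class HashRoute:
    """Hash routing: a request key deterministically maps to one
    instance, so the same key always lands on the same instance."""
    def __init__(self, instances):
        self.instances = instances
    def pick(self, request):
        return self.instances[hash(request["key"]) % len(self.instances)]
```

A custom strategy only needs to implement the same `pick(request)` interface, which matches the interface-defined, independently replaceable module design described above.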
The foregoing embodiments may be modified in numerous ways by those skilled in the art without departing from the principle and spirit of the invention. The scope of the invention is defined by the claims rather than by the foregoing embodiments, and all such implementations fall within the scope of the invention.