
CN120045285A - Container image generation system based on agentless service mesh framework - Google Patents

Container image generation system based on agentless service mesh framework

Info

Publication number
CN120045285A
CN120045285A (application CN202510122047.3A)
Authority
CN
China
Prior art keywords
service, unit, traffic, module, data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202510122047.3A
Other languages
Chinese (zh)
Inventor
胡畔
李逸凡
蔡鸿明
于晗
姜丽红
钱麒丹
陈诺
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Jiao Tong University
Original Assignee
Shanghai Jiao Tong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Jiao Tong University
Priority to CN202510122047.3A
Publication of CN120045285A
Legal status: Pending


Classifications

    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44 Arrangements for executing specific programs
    • G06F9/455 Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F9/45533 Hypervisors; Virtual machine monitors
    • G06F9/45558 Hypervisor-specific management and integration aspects
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44 Arrangements for executing specific programs
    • G06F9/445 Program loading or initiating
    • G06F9/44521 Dynamic linking or loading; Link editing at or after load time, e.g. Java class loading
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44 Arrangements for executing specific programs
    • G06F9/455 Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F9/45533 Hypervisors; Virtual machine monitors
    • G06F9/45558 Hypervisor-specific management and integration aspects
    • G06F2009/45595 Network integration; Enabling network access in virtual machine instances

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract


A container image generation system based on an agentless service mesh framework includes a multi-dimensional rule configuration module, a full-link grayscale isolation module, a microservice policy injection module, a bytecode enhancement module, and a link pass-through implementation module. By combining runtime bytecode enhancement with the respective advantages of software development kits (the core tools and libraries required to develop, compile, debug, and run applications) and service meshes, the invention provides non-intrusive injection of governance logic, flexible extensibility, and efficient performance optimization. It not only simplifies service governance in microservice architectures but also improves system observability and security, which is significant for modern microservice environments.

Description

Container image generation system based on agentless service mesh framework
Technical Field
The invention relates to a technology in the field of information processing, and in particular to a container image generation system based on an agentless service mesh framework.
Background
In today's microservice architectures, an application is typically split into multiple independent services that communicate over a network, which increases the scalability and flexibility of the system. Initially, developers addressed service governance with SDKs, integrating various libraries and tools directly into the code. However, as microservice architectures continued to grow, this approach became limiting: it introduced dependencies on specific tools (e.g., Istio, Linkerd), required partial adjustments to the service code, and required services to expose specific metadata or headers for distributed tracing and monitoring, creating code-intrusion and consistency problems. Service mesh technology evolved to solve these problems by introducing a proxy layer between services, separating governance logic from application code and enabling better governance and management. However, service meshes introduce new challenges of their own: inter-service communication overhead, configuration complexity, the risk that a local failure paralyzes the overall service, a lack of real-time scheduling and dynamic awareness, and dependence on specific platforms and frameworks.
Disclosure of Invention
The present invention addresses the above deficiencies of the prior art by providing a container image generation system based on an agentless service mesh framework. By combining runtime bytecode enhancement with the respective advantages of software development kits (the core tools and libraries required to develop, compile, debug, and run applications) and service meshes, it provides non-intrusive injection of governance logic, flexible extensibility, and efficient performance optimization. It not only simplifies service governance in microservice architectures but also improves system observability and security, which is significant for modern microservice environments.
The invention is realized by the following technical scheme:
The invention relates to a container image generation system based on an agentless service mesh framework, comprising a multi-dimensional rule configuration module, a full-link grayscale isolation module, a microservice policy injection module, a bytecode enhancement module, a link pass-through implementation module, and an image generation module. The multi-dimensional rule configuration module parses rule configuration parameters, completes the configuration of multi-active zones according to the multi-active architecture model and traffic scheduling rules in the user's system development document, and computes the traffic split with a traffic scheduling algorithm to generate a configuration file. The full-link grayscale isolation module computes per-instance traffic weights and dynamically generates resource isolation configuration according to the multi-active traffic scheduling policy, based on multi-tenant resource and service isolation requirements. The microservice policy injection module uses a dynamic loading mechanism to embed service discovery and distributed tracing into the application's bytecode at runtime. The bytecode enhancement module removes redundant logic and performs intra-method optimization. The link pass-through implementation module intercepts and computes traffic closed-loop parameters in the links based on an extended pass-through message structure, and generates a complete link configuration file through encapsulation. The image generation module integrates the output of the other modules and builds a deployable container image file with a container tool.
Technical effects
The invention, a container image generator based on an agentless service mesh framework, takes the system development requirements document and multi-dimensional rule configuration parameters provided by the user and produces a deployable container image file that meets those requirements. Compared with the prior art, the invention injects service mesh functionality directly into the application by manipulating its bytecode at runtime, which reduces extra network overhead and lowers latency. No additional sidecar proxies need to be run, reducing the consumption of CPU, memory, and network resources. Operation and maintenance are also simplified: the agentless mode realized through bytecode enhancement concentrates all service-mesh configuration and management on the control plane, with no need to configure a sidecar proxy separately for each service instance, reducing environmental complexity. Because bytecode enhancement serves as the service-governance data plane, the data plane is naturally decoupled from the application and can be upgraded in place; when facing version upgrade iterations, unified management is achieved without repackaging and rebuilding the user's application. By operating on the application's bytecode at runtime, mesh-related logic is inserted directly at the bytecode level without developers modifying the application's source code, so the system is compatible with the existing ecosystem, and mesh functionality can be flexibly added or removed to suit different runtime requirements.
For legacy systems whose source code cannot be modified, this is also an ideal approach: modern service mesh functionality can be integrated into an old system, improving its functions and performance. The invention delivers the powerful capabilities of a service mesh while preserving the stability of the existing system, and is an efficient and flexible solution.
Drawings
FIG. 1 is a schematic diagram of a system according to the present invention;
FIG. 2 is a schematic diagram of the implementation device of the embodiment.
Detailed Description
As shown in FIG. 1, the embodiment relates to a container image generation system based on an agentless service mesh framework, comprising a multi-dimensional rule configuration module, a full-link grayscale isolation module, a service policy injection module, a bytecode enhancement module, a full-link pass-through implementation module, and an image generation module.
The multi-dimensional rule configuration module comprises a rule parsing unit, a traffic distribution unit, a resource weight calculation unit, and a configuration generation unit. The rule parsing unit parses key rule information from the multi-active architecture model, traffic scheduling rules, and domain name configuration parameters provided by the user, and extracts the configuration items and traffic distribution logic of the multi-active zones to obtain the basic rule data used for calculation. The traffic distribution unit computes the traffic split from the distribution logic and multi-active zone definitions provided by the rule parsing unit, combined with real-time traffic monitoring data, to obtain the distribution ratios and path plans for traffic across the multi-active units. The resource weight calculation unit computes and dynamically adjusts the resource weight of each instance according to the instance traffic ratios produced by the traffic distribution unit and resource availability indicators, yielding the resource allocation among tenants. The configuration generation unit dynamically generates a configuration file from the weight data produced by the resource weight calculation unit and the traffic distribution path plans, yielding a multi-dimensional rule configuration file for service mesh scheduling, to be consumed and deployed by subsequent modules.
The traffic distribution assigns an initial weight W_i to each data center DC_i. Periodically, the health status H_i of each data center is obtained through the monitoring system: H_i = 1 indicates normal operation, H_i = 0 indicates failure (no further traffic is distributed to that node). The weight is then dynamically adjusted according to the response time R_i and the load ratio L_i/L_max, giving the dynamic weight W'_i = W_i · H_i · (1/R_i) · (1 − L_i/L_max), where W'_i is the adjusted weight, L_i is the current load, and L_max is the maximum bearable load of a single data center. The total traffic T is then distributed to DC_i in proportion to W'_i: T_i = T · W'_i / Σ_j W'_j. Requests are distributed to the data centers one by one in weighted round-robin fashion, following these weight ratios. Finally, H_i is checked periodically: DC_i is removed from the round-robin queue when H_i = 0, and re-added with its weight re-initialized when H_i = 1.
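The adjustment formula is not fully reproduced in the text, so the product form below (base weight scaled by health, inverse response time, and remaining capacity) is an assumption consistent with the factors named above; a minimal Python sketch:

```python
def dynamic_weights(weights, health, resp_time, load, l_max):
    """Per-data-center dynamic weight W'_i = W_i * H_i * (1/R_i) * (1 - L_i/L_max).

    A failed node (H_i = 0) gets weight 0; slower responses and higher load
    shrink the weight. The exact product form is an assumption.
    """
    return [w * h * (1.0 / r) * (1.0 - l / l_max)
            for w, h, r, l in zip(weights, health, resp_time, load)]

def distribute(total, adjusted):
    """Split total traffic T across centers: T_i = T * W'_i / sum_j W'_j."""
    s = sum(adjusted)
    return [total * w / s for w in adjusted]
```

For example, two equally weighted, equally loaded healthy centers where the second responds twice as slowly end up serving a 2:1 split of the traffic.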
The resource weight calculation updates the weight from real-time load feedback (for example, the current CPU utilization C_i): W_i ← α·(1 − C_i) + (1 − α)·W_i, where α is a smoothing factor controlling the blend of new and old load information. On this basis, a deep deterministic policy gradient algorithm is integrated: a deep learning component is embedded into the system so that, beyond the simple feedback formula's adaptive adjustment of traffic weights in a dynamic environment, the system learns long-term returns, takes future performance impact into account, and reduces local load imbalance and system bottlenecks.
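A one-line sketch of the smoothed update; blending spare capacity (1 − C_i) into the previous weight is the assumed form of the elided formula:

```python
def smooth_weight(prev_weight, cpu_util, alpha=0.3):
    """Exponentially weighted update of an instance's traffic weight.

    alpha controls how much the fresh load observation counts versus the
    accumulated history; (1 - cpu_util) is spare capacity. The blending
    form and the default alpha are assumptions.
    """
    return alpha * (1.0 - cpu_util) + (1.0 - alpha) * prev_weight
```

With alpha = 0.5, a weight of 0.5 on an instance at 20% CPU moves halfway toward its spare capacity of 0.8, landing at 0.65.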
The deep deterministic policy gradient algorithm involves defining a state space S_t = {C_1, …, C_n, T_1, …, T_n, Q_1, …, Q_n}, where C_i is the CPU utilization of the ith server, T_i its average response time, and Q_i its current queue length; defining the action, namely the space of per-server weight adjustments A_t = {ΔW_1, ΔW_2, …, ΔW_n}; and defining the reward function, which must measure the efficiency of the traffic distribution: R_t = −(λ_1·Var + λ_2·T̄), where Var is the variance of the traffic distribution (the smaller, the more balanced), T̄ is the average response time, and λ_1, λ_2 are hyperparameters trading off balance against response time.
The training process of the deep deterministic policy gradient algorithm is as follows. The Actor network generates the action A_t (i.e., the weight adjustment), and the Critic network evaluates the value Q of the state-action pair (S_t, A_t). The Critic network is updated against the temporal-difference (TD) target y_t = R_t + γ·Q(S_{t+1}, A_{t+1}) by minimizing the loss L = (y_t − Q(S_t, A_t))². The Actor network is updated by the policy gradient method: ∇_θ J ≈ E[∇_a Q(S, a)|_{a=μ(S)} · ∇_θ μ(S)]. The overall dynamic weight adjustment algorithm is: 1) initialize the Actor and Critic networks and their parameters; 2) store state transitions (S_t, A_t, R_t, S_{t+1}) in an experience replay buffer; 3) randomly sample mini-batches from the buffer to train the networks; 4) update the target networks every fixed number of steps.
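The neural networks themselves need a deep-learning framework, but the supporting mechanics named in steps 2) and 4) — the experience replay buffer and the target-network update — can be sketched in plain Python. The soft (Polyak-averaged) update and its rate tau shown here are common DDPG practice, assumed rather than specified by the text:

```python
import random
from collections import deque

class ReplayBuffer:
    """Fixed-size experience replay storing (S_t, A_t, R_t, S_{t+1}) tuples."""
    def __init__(self, capacity):
        self.buf = deque(maxlen=capacity)  # oldest transitions drop out first

    def push(self, s, a, r, s_next):
        self.buf.append((s, a, r, s_next))

    def sample(self, batch_size):
        # Uniform random mini-batch, capped at the current buffer size.
        return random.sample(self.buf, min(batch_size, len(self.buf)))

def soft_update(target_params, online_params, tau=0.005):
    """Polyak averaging for target networks:
    theta_target <- tau * theta_online + (1 - tau) * theta_target."""
    return [tau * o + (1.0 - tau) * t
            for t, o in zip(target_params, online_params)]
```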
The full-link grayscale isolation module comprises a multi-tenant isolation unit, a traffic rule import unit, an instance traffic distribution unit, and a version management unit. The multi-tenant isolation unit performs inter-tenant resource and service isolation according to each tenant's service attributes, resource usage requirements, and isolation policies, producing a multi-tenant isolation scheme that keeps different tenants' data and traffic fully independent. The traffic rule import unit parses and imports the traffic distribution rules configured by users (such as user attributes, geographic location, and service characteristics), producing traffic scheduling rules applicable to different tenants and business scenarios. The instance traffic distribution unit performs traffic distribution calculation and path planning from the output of the traffic rule import unit and the real-time state of each instance in the system, through responsibility-chain-based sticky filters, label filters, and load balancing filters. The version management unit runs different versions of microservices in independent lanes according to business logic requirements and the system upgrade strategy, performing version isolation and dynamic management to obtain a flexible upgrade scheme supporting blue-green deployment, canary release, and similar scenarios.
Multi-tenant isolation refers to independent partitioning of compute, storage, and network resources through virtualization, combined with tenant identity separation and access control, to ensure that resources of different users do not interfere with each other. Meanwhile, Virtual Private Clouds (VPCs), encrypted transmission, rate-limiting policies, and fine-grained permission management strengthen data protection and service stability, prevent resource contention and data leakage, and ensure secure, high-performance operation in a multi-tenant environment.
The responsibility-chain-based sticky filter uses hash allocation on the session identifier to ensure that requests from the same user are always routed to a fixed instance, maintaining session consistency: Instance = Hash(SessionID) mod N, where Instance is the target instance, SessionID is the session identifier of the user's request (such as a Cookie or Token), and N is the number of back-end instances.
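A direct sketch of the formula. MD5 is used here as a stable stand-in for the unspecified Hash function, since Python's builtin hash() is salted per process and would break stickiness across restarts:

```python
import hashlib

def sticky_instance(session_id: str, n_instances: int) -> int:
    """Instance = Hash(SessionID) mod N: map a session to a fixed backend."""
    digest = hashlib.md5(session_id.encode("utf-8")).hexdigest()
    return int(digest, 16) % n_instances
```

The same SessionID always maps to the same instance, so session state held on that instance stays valid for the whole session.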
The label filter uses a multi-dimensional matching priority routing strategy to route traffic to the instance set with the highest priority: Score = Σ_i w_i · Match(Tag_i, Rule_i), where w_i is the weight, i.e., the priority of each tag, Tag_i is the ith tag carried by the request, Rule_i is the tag rule it is matched against, and Match is a Boolean function returning 1 (matched) or 0 (unmatched).
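A sketch of the scoring rule; representing each instance's tag rules as a tag-to-weight mapping is an assumed encoding, not one specified by the text:

```python
def label_score(request_tags, rules):
    """Score = sum_i w_i * Match(Tag_i, Rule_i): each tag rule that matches
    a tag on the request contributes its weight w_i."""
    return sum(w for tag, w in rules.items() if tag in request_tags)

def route(request_tags, instance_rules):
    """Pick the instance set whose tag rules score highest for this request."""
    return max(instance_rules,
               key=lambda inst: label_score(request_tags, instance_rules[inst]))
```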
The load balancing filter optimizes traffic distribution through weighted round-robin: P(i) = W_i / Σ_j W_j, where P(i) is the allocation probability of instance i and W_i is the instance's weight.
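The proportional target P(i) above can be met deterministically with smooth weighted round-robin (the nginx-style scheme); this concrete realization is assumed here rather than specified by the text:

```python
def smooth_wrr(weights, n_requests):
    """Smooth weighted round-robin: over any window, instance i's share of
    picks approaches P(i) = W_i / sum(W), without long bursts to one node."""
    current = [0] * len(weights)
    total = sum(weights)
    picks = []
    for _ in range(n_requests):
        # Every instance accrues credit equal to its weight...
        current = [c + w for c, w in zip(current, weights)]
        # ...the richest instance is picked and pays back the total weight.
        i = max(range(len(weights)), key=lambda k: current[k])
        current[i] -= total
        picks.append(i)
    return picks
```

With weights [5, 1], six consecutive requests yield five picks of instance 0 and one of instance 1, matching P(0) = 5/6.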
Version management achieves efficient iteration and reliable deployment of versions through a combination of centralization and automation. The system supports multi-version coexistence and a grayscale release mechanism, allowing the stability and performance of a new version to be verified gradually without affecting existing services. By integrating continuous integration / continuous delivery (CI/CD) toolchains, version management covers code compilation, functional testing, environment adaptation, and automated deployment, and, combined with a version rollback mechanism, keeps the upgrade process controllable and safe to meet high-availability requirements.
The service policy injection module comprises a policy adaptation unit, a control plane unit, a registry synchronization unit, and a bytecode enhancement unit. The policy adaptation unit parses and configures the rule information entered by the user (including routing policies, authorization policies, retry policies, circuit-breaking policies, and the like), producing a dynamic policy configuration scheme adapted to mainstream service frameworks. The control plane unit performs resilience enhancement, event bus configuration, synchronization policy implementation, logging, and exposure of system telemetry data according to the system's running state and business requirements, producing a service control plane with real-time monitoring, elastic scaling, and event-driven capabilities. The registry synchronization unit ensures service-state consistency across multiple centers or regions by synchronizing the service information of each registry in real time, providing technical support for cross-region load balancing and disaster recovery; it synchronizes the health status, online/offline events, and metadata changes of service instances in real time so that cross-region load balancing decisions are based on the latest service state. The bytecode enhancement unit enhances the target application according to the service policies and dynamic loading requirements, producing high-performance service instances with the policy logic injected.
Event bus configuration works as follows: the data input layer receives real-time events and is configured with unified message-body parsing rules to produce a unified context object; the data processing layer parses, filters, enriches, and transforms the data through filtering and enrichment operators; the processed data can then be distributed to different message queues or storage systems through the data output layer. A connector mechanism enables rapid support for new data sources or targets, while custom scripts can be configured to dynamically adjust operator logic without frequently restarting the service.
The filtering operator screens input data against predefined rules, keeping data that satisfies the condition and discarding the rest. Its core is a condition-based Boolean decision: Y = {x ∈ X | P(x) = true}, where X is the set of input data, P(x) is a predicate function defining the filter condition, and Y is the output data set containing only data for which P(x) = true. The enrichment operator supplements or enhances input data by introducing external data sources or computation logic, providing more context for downstream processing: Y = {(x, E(x)) | x ∈ X}, where X is the set of input data, E(x) is an enrichment function defining how to supplement or enhance the data x, and Y is the output data set of enriched data. In an event bus, the filtering and enrichment operators usually cooperate in a responsibility chain: invalid or irrelevant events are first dropped by the filtering operator to reduce processing pressure, the enrichment operator then strengthens the context of the retained data set, and finally the chain emits its output.
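The two operators and their responsibility-chain composition can be sketched directly from the set definitions above; the example predicate and enrichment function are illustrative, not part of the system:

```python
def filter_op(events, predicate):
    """Filtering operator: Y = {x in X | P(x) = true}."""
    return [e for e in events if predicate(e)]

def enrich_op(events, enrich):
    """Enrichment operator: Y = {(x, E(x)) | x in X} - merge E(x) into x."""
    return [{**e, **enrich(e)} for e in events]

def chain(events):
    """Responsibility chain: drop irrelevant events first, then add context."""
    kept = filter_op(events, lambda e: e.get("level") != "debug")
    return enrich_op(kept, lambda e: {"source": "event-bus"})
```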
The implementation of the synchronization strategy specifically comprises the following steps:
i) Dynamic grouping and route planning: a service logic grouping G is mapped to a specific resource group (such as a server or a topic queue): G = f(U, T), where G is the grouping, U is the set of available resource units (servers, service nodes, and the like), and T is the task or topic.
For high-load scenarios, a priority-based degradation strategy ensures that critical events are handled first, using the priority function P(x) = α_1·W(x) + α_2·R(x), where P(x) is the priority, W(x) is the weight, R(x) is the resource occupancy ratio, and α_1, α_2 are adjustable coefficients.
ii) Messages are filtered, parsed, and distributed layer by layer through the responsibility chain pattern to achieve real-time monitoring and link optimization. Each processing step is realized by an operator function: Y_{i+1} = f_i(X_i), where X_i is the input data, f_i is the operator logic, and Y_{i+1} is the processed output.
iii) A bi-directional or multi-directional communication protocol (e.g., gRPC, MQTT) ensures data consistency between different systems; specifically, the synchronization window is W_s = max(T_d − T_r, 0), where W_s is the synchronization delay window, T_d is the data generation time, and T_r is the data reception time.
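The two scalar formulas in the steps above (the degradation priority and the synchronization window) are small enough to sketch directly; the coefficient values used here are illustrative assumptions:

```python
def priority(weight, resource_ratio, a1=0.7, a2=0.3):
    """Degradation priority P(x) = a1*W(x) + a2*R(x); a1, a2 are tunable."""
    return a1 * weight + a2 * resource_ratio

def sync_window(t_generated, t_received):
    """Synchronization delay window W_s = max(T_d - T_r, 0), clamped at 0."""
    return max(t_generated - t_received, 0)
```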
Logging and system telemetry data are exposed through open standards, supporting multiple monitoring protocols (such as OpenTelemetry and Prometheus) and providing full-link tracing, performance metric collection, and real-time analysis. Logging adopts a layered design covering key events, errors, traffic, resource utilization, and other dimensions; the log level and output format can be configured dynamically to meet different debugging needs; and seamless integration with external logging systems (such as ELK and Fluentd) enables efficient log storage, querying, and alerting while optimizing system performance.
The bytecode enhancement module comprises a code interception unit, a modification point locating unit, an instruction set operation unit, and a code reloading unit. The code interception unit reads and preprocesses the target code file from the binary or intermediate-representation file when the program is loaded, producing an intermediate representation of the program for analysis. The modification point locating unit selects the specific code fragments to be enhanced (such as function entries or particular call sites) from the code information provided by the interception unit combined with the enhancement requirements, and identifies the modification points satisfying the enhancement conditions, producing a list of target modification points. The instruction set operation unit performs instruction insertion, replacement, or deletion according to that list, injecting the required bytecode logic (including entry enhancement, exit enhancement, and behavior overriding) to produce an enhanced instruction set. The code reloading unit reloads the modified instruction set, overwriting the running environment with the enhanced code to obtain enhanced, high-performance program logic.
Registry synchronization includes policy service execution, service controller management, and context synchronization service coordination based on strong consistency, adopting a log-replication-based synchronization mechanism combined with vector-clock-based strong-consistency synchronization. Specifically, each registry keeps a log sequence L_i = {l_1, l_2, …, l_n}; the registry state is determined by that log sequence, S_i = f(L_i), where f is the state generation function; and the log synchronization rule is L_follower ← L_leader, where L_leader = max_i(L_i). Service stress testing found that the leader node is under heavy pressure and prone to single-point failure, and that synchronization latency grows when strong consistency is required, making the cost of consistency high.
In strong-consistency synchronization, each node maintains a vector VC_i representing the timestamps of the global events observed by node i. The number of elements of the vector clock VC equals the number of nodes, and each element VC[i] is the local time of node i. Each registry R_i initializes its vector clock VC_i with all elements 0; when a service instance registers or deregisters, registry R_i increments its local entry, VC_i[i] ← VC_i[i] + 1, and the service state change is represented by an event E_i carrying the vector clock VC_i and the change content. The synchronization process is as follows: registry R_i sends event E_i to the other registries R_j, and each receiver R_j merges the vector clocks, VC_j[k] ← max(VC_j[k], VC_i[k]) for every k, then decides from the event timestamps whether to apply the change. If multiple events conflict, the vector clock ordering rule is used: E_a precedes E_b when VC_a ≤ VC_b componentwise with at least one strict inequality, with concurrent events ordered by a deterministic tie-breaker, ensuring that events are handled in the same order on all nodes and eliminating inconsistent states.
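The increment, merge, and ordering rules can be sketched in a few lines of Python; the componentwise comparison below is the standard vector-clock partial order:

```python
def tick(vc, i):
    """Local event at node i: VC_i[i] <- VC_i[i] + 1."""
    vc = list(vc)
    vc[i] += 1
    return vc

def merge(vc_receiver, vc_sender):
    """On receiving event E_i, node j merges: VC_j[k] = max(VC_j[k], VC_i[k])."""
    return [max(a, b) for a, b in zip(vc_receiver, vc_sender)]

def happened_before(vc_a, vc_b):
    """E_a -> E_b iff VC_a <= VC_b componentwise, with at least one strict."""
    return all(a <= b for a, b in zip(vc_a, vc_b)) and vc_a != vc_b
```

Two events whose clocks are mutually incomparable (neither happened-before the other) are concurrent and need the deterministic tie-breaker mentioned above.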
Synchronizing the health status of service instances involves periodic health checks based on Ping, HTTP status codes, and business-level heartbeat detection. The health status calculation updates the status data in real time through a sliding window, avoiding judgments distorted by stale data: if the fraction of successful probes within the window reaches the threshold, the health status of the ith instance at time t is 1. In practice, however, while the sliding-window method is highly responsive, it cannot anticipate potential problems in advance.
The health state is therefore preferably predicted with a time-series forecasting algorithm. Specifically, indicators with strong trend and periodicity are predicted with an ARIMA (autoregressive integrated moving average) model: X_t = c + Σ_{i=1}^{p} φ_i·X_{t−i} + Σ_{j=1}^{q} θ_j·ε_{t−j} + ε_t, where p is the order of the autoregressive (AR) part, q is the order of the moving average (MA) part, and ε_t is white noise. First, the stationarity of the sequence is checked (e.g., by a unit root test) and non-stationary sequences are differenced; optimal p and q values are selected with the AIC/BIC criteria. The ARIMA model is then fitted, computing the parameters φ, θ, and c, and the future time point X_{t+h} is predicted. An LSTM (long short-term memory network) model can also be incorporated for high-dimensional, nonlinear, complex health-state data.
The long short-term memory network learns long-term dependencies in the time series through a recurrent neural network (RNN) architecture: forget gate f_t = σ(W_f·[h_{t−1}, x_t] + b_f); input gate i_t = σ(W_i·[h_{t−1}, x_t] + b_i); cell state update C̃_t = tanh(W_C·[h_{t−1}, x_t] + b_C), C_t = f_t ⊙ C_{t−1} + i_t ⊙ C̃_t; output gate o_t = σ(W_o·[h_{t−1}, x_t] + b_o), h_t = o_t ⊙ tanh(C_t), where x_t is the current input, h_{t−1} is the hidden state at the previous time step, and C_t is the current cell state. The health-state indicator X_t is organized into sliding-window sequence data, an LSTM network is constructed, the sequence {X_{t−n}, X_{t−n+1}, …, X_t} is input, and the prediction X_{t+1} is output, so future health states can be predicted in a rolling fashion.
The time-series forecasting algorithm is implemented by collecting historical health-state indicators of the service, such as response time, CPU utilization, and error rate, and preprocessing the data, i.e., making the time series stationary and removing noise and trend. Stationarity is verified and the optimal p, d, q parameters are determined with AIC or BIC. In the distributed scenario, ARIMA captures the linear trend, yielding a residual sequence; the residual sequence is modeled nonlinearly with an LSTM by converting it into a time-step format, feeding in the sequence data, and building a multi-layer LSTM to capture the temporal dependence. The serial hybrid takes the sum of the two: X̂_t = X̂_t^{ARIMA} + X̂_t^{LSTM}. The parallel model feeds the time series to the ARIMA and LSTM models simultaneously and fuses their predictions by weights: X̂_t = w_1·X̂_t^{ARIMA} + w_2·X̂_t^{LSTM}. The method combines the linear prediction capability of ARIMA with the nonlinear modeling capability of LSTM, adds an attention mechanism to the LSTM to dynamically focus on key time points, and continuously updates the prediction model with a sliding-window technique, adapting to dynamic changes in health state and achieving accurate prediction and early warning of service health, improving the robustness of health-state forecasting.
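The fusion step of the hybrid is simple enough to show without a deep-learning framework; the component forecasts are taken as given inputs, and the fusion weights w1, w2 are assumed hyperparameters:

```python
def serial_hybrid(arima_pred, lstm_residual_pred):
    """Serial hybrid: ARIMA captures the linear trend, the LSTM models the
    residual, and the final forecast is their sum."""
    return arima_pred + lstm_residual_pred

def fuse_predictions(arima_pred, lstm_pred, w1=0.5, w2=0.5):
    """Parallel hybrid: X_hat = w1 * X_hat_ARIMA + w2 * X_hat_LSTM.
    The component models are stubbed out; only the fusion is shown."""
    return w1 * arima_pred + w2 * lstm_pred
```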
On-demand loading of class management services refers to optimizing the class loading flow through lazy loading and dynamic loading techniques, reducing system resource occupation and improving service startup efficiency and runtime flexibility. It includes full-lifecycle class management and class instantiation load optimization. The invention dynamically loads classes using a custom class loader, realizing fine-grained management of classes through creation, loading, initialization, use and unloading. By isolating the class loading environments of different modules, class loading conflicts are avoided. Combined with the factory pattern or proxy pattern, class instantiation is delayed, ensuring that the relevant logic is loaded only on actual invocation. The on-demand loading mechanism, combined with the plug-in design of the invention, allows plug-in modules to be dynamically loaded and unloaded at runtime.
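The delayed-instantiation idea (proxy pattern) can be sketched as follows; `LazyProxy` and `HeavyService` are illustrative names, not part of the invention:

```python
class LazyProxy:
    """Delay instantiation until first real use (proxy-pattern sketch)."""
    def __init__(self, factory):
        self._factory = factory
        self._target = None
    def __getattr__(self, name):
        # Only reached for attributes not on the proxy itself.
        if self._target is None:
            self._target = self._factory()   # loaded only on actual invocation
        return getattr(self._target, name)

class HeavyService:
    loads = 0
    def __init__(self):
        HeavyService.loads += 1              # stands in for expensive class init
    def ping(self):
        return "pong"

svc = LazyProxy(HeavyService)
before = HeavyService.loads                  # still 0: nothing loaded yet
reply = svc.ping()                           # first call triggers instantiation
after = HeavyService.loads
```

In the JVM setting the same effect is obtained with a custom ClassLoader plus factory/proxy objects; the Python sketch only illustrates the control flow.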
Policy service execution specifically means that strongly consistent cross-region service governance is realized through the steps of dynamic rule loading, real-time state sensing, policy execution and synchronization, and multi-region consistency guarantee. First, policy rules are loaded and rule changes are monitored in the service startup phase; the rules are parsed into a unified policy model and stored in memory. Second, the registry senses the health state, online/offline information and metadata changes of service instances in real time, and local decisions are dynamically updated, including weight adjustment, rate-limit threshold setting and route optimization. The policy execution results are then broadcast to all nodes through the event bus or registry, and version control ensures the consistency of the synchronized results; if a conflict is detected during synchronization, it is resolved using a master-slave mechanism or voting, rolling back to the previous version if necessary. Finally, multi-region synchronization efficiency is optimized through incremental synchronization, batching and transaction mechanisms, while state consistency is maintained, multi-protocol adaptation is supported, and policy synchronization in heterogeneous systems is guaranteed to be correct.
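A minimal sketch of version-controlled policy synchronization with conflict detection and rollback; `PolicyStore` and its fields are hypothetical, and real conflicts would be resolved by a master-slave mechanism or voting before rolling back:

```python
class PolicyStore:
    """Versioned policy model with stale-base conflict check and rollback."""
    def __init__(self):
        self.version = 0
        self.policy = {}
        self.history = []
    def apply(self, new_policy, base_version):
        if base_version != self.version:     # conflict: update based on stale state
            return False
        self.history.append((self.version, dict(self.policy)))
        self.version += 1
        self.policy = new_policy
        return True
    def rollback(self):
        """Return to the previous version when synchronization fails."""
        if self.history:
            self.version, self.policy = self.history.pop()

store = PolicyStore()
ok1 = store.apply({"qps_limit": 100}, base_version=0)   # accepted, version -> 1
ok2 = store.apply({"qps_limit": 50}, base_version=0)    # stale base -> rejected
store.rollback()                                        # back to empty policy, v0
```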
Service controller governance ensures that service calls under the multi-active architecture are efficient, stable and reliable through fine-grained control and policy-based management, covering service routing, load balancing, rate limiting and degradation, so as to achieve service quality assurance and efficient resource utilization. The service controller distributes requests to qualifying service instances through dynamic routing rules and senses service state in real time to adjust routing; it optimizes resource allocation through dynamic weight updates combined with load-balancing algorithms such as round-robin, weighted random or least connections; it sets rate-limiting rules through the token bucket algorithm, limits the request rate according to QPS or TPS, and configures emergency strategies to handle over-limit situations; and when a service is unavailable, it executes degradation strategies based on trigger conditions, such as returning a default response or calling a standby service.
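The token bucket rate limiting mentioned above can be sketched as follows; time is passed in explicitly for determinism, and the `rate`/`capacity` values are illustrative:

```python
class TokenBucket:
    """Token-bucket limiter: capacity caps bursts, rate caps sustained QPS."""
    def __init__(self, rate, capacity):
        self.rate = rate            # tokens refilled per second
        self.capacity = capacity
        self.tokens = capacity
        self.last = 0.0
    def allow(self, now):
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False                # over limit: trigger the emergency strategy

bucket = TokenBucket(rate=2.0, capacity=2)
# Two immediate requests pass, the third is limited, and after 1s one token pair
# has been refilled so the fourth passes again.
results = [bucket.allow(t) for t in (0.0, 0.0, 0.0, 1.0)]
```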
Context synchronization service coordination ensures the consistency of context information for cross-node services in the distributed multi-active architecture. Through this mechanism, services on different nodes can share critical context data, thereby enabling global visibility and consistent handling of request links. Context information propagates between nodes through efficient serialization and network transport mechanisms. A synchronization protocol ensures the consistency of state data among nodes, supporting both eventual consistency and strong consistency models; the specific choice depends on the requirements of the business scenario.
In the strong consistency scenario, a distributed consensus protocol such as Paxos or Raft is commonly used. The nodes record context changes in a log, ensuring that a majority of nodes receive the same change log L: L_commit = {L_1, L_2, ..., L_n}, if Majority(L_i) = true, i.e., if log entry L_i is acknowledged by a majority of nodes, the change is committed. A heartbeat mechanism H ensures that a leader is present. Compressing the context data using Huffman coding or LZ algorithms, C_compressed = f_compress(C), where f_compress is the compression function, can significantly reduce the amount of data transmitted. This provides a strict consistency guarantee and is applicable to critical context synchronization across data centers.
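A minimal sketch of the majority-commit rule and the context compression step; `zlib` (an LZ-family codec) stands in for the Huffman/LZ compression function f_compress, and the context field names are illustrative:

```python
import json
import zlib

def committed(acks, cluster_size):
    """A change commits once a majority of nodes acknowledge it (Raft-style)."""
    return acks > cluster_size // 2

# With 5 registry nodes, 3 acknowledgements commit the log entry; 2 do not.
c1 = committed(acks=3, cluster_size=5)
c2 = committed(acks=2, cluster_size=5)

# C_compressed = f_compress(C): shrink the serialized context before transport.
context = {"trace_id": "abc123", "user_tag": "vip", "payload": "x" * 500}
raw = json.dumps(context).encode()
compressed = zlib.compress(raw)
ratio = len(compressed) / len(raw)   # well below 1 for repetitive payloads
```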
The full-link transparent transmission implementation module comprises a message interception unit, a transparent transmission message definition unit, a data aggregation encapsulation unit and a link closed-loop unit. The message interception unit performs request interception and metadata extraction on the interaction request information between the service consumer and provider, obtaining the raw data of the transparent transmission message, including the request identifier and context information. The transparent transmission message definition unit parses and normalizes the message format according to a predefined extended transparent transmission message structure, obtaining standardized transparent transmission messages that meet the requirements of link tracing and traffic management. The data aggregation encapsulation unit aggregates and encapsulates the standardized transparent transmission messages, obtaining transparent transmission data packets containing complete link information. The link closed-loop unit performs closed-loop traffic calculation and path optimization within the link according to the encapsulated transparent transmission data packets, combined with the service units and lane rules, obtaining an execution scheme for the closed-loop link.
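The interception, standardization and encapsulation pipeline can be sketched as follows; the message fields (`request_id`, `trace`, `payload`) are assumed for illustration and do not reflect the patent's exact message structure:

```python
import json
import uuid

def intercept(request, context):
    """Extract metadata and wrap the call in a standardized pass-through message."""
    return {
        # Keep an existing request id so the whole link shares one identifier.
        "request_id": context.get("request_id") or uuid.uuid4().hex,
        "trace": context.get("trace", []) + [request["service"]],  # link path so far
        "payload": request["payload"],
    }

def encapsulate(messages):
    """Aggregate per-hop messages into one packet carrying the full link."""
    return json.dumps({"hops": messages}, sort_keys=True)

# Two hops of a call chain: consumer -> order service -> payment service.
m1 = intercept({"service": "order", "payload": "p"}, {"request_id": "r1"})
m2 = intercept({"service": "pay", "payload": "p"},
               {"request_id": m1["request_id"], "trace": m1["trace"]})
packet = encapsulate([m1, m2])
```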
As shown in fig. 2, the implementation device of the container mirror image generating system based on the agentless service grid framework in this embodiment includes an upper layer application, a container mirror image generating system and an external data layer.
The upper-layer application functions include rule input, policy injection result feedback, multi-active model visualization and service retrieval. Rule input can use Spring Boot to implement RESTful API and gRPC interfaces, receiving rules in JSON or YAML format; rule content is parsed by an ANTLR parser, the responsibility chain is dynamically enhanced by means of Javassist or ByteBuddy, and rules are injected in real time and take effect without restarting. Policy injection feedback collects execution data using Prometheus, transfers results through a Kafka or RabbitMQ event bus, and notifies the upper-layer application through Webhook callbacks. Multi-active model visualization integrates OpenTelemetry or custom probes to collect node and traffic data, aggregates data using Redis, pushes data updates through WebSocket, and dynamically renders the topology view through ECharts. The service retrieval module synchronizes service instance information with Consul or Eureka, builds a distributed index using Elasticsearch, and provides efficient multi-dimensional query and real-time update services.
The data layer operates with external data sources through a persistence layer interface, a transaction management framework (Seata), and the like. The database is operated directly using the standard JDBC (Java Database Connectivity) interface, or database operations are simplified using an ORM framework such as MyBatis or Hibernate, and data integrity is ensured by combining database transactions with business logic. When processing document, time-series and key-value data, it can cooperate with non-relational databases by means of a distributed cache system or message queue (e.g., Kafka).
Service policy injection in the container mirror image generating system specifically includes: when started, a service instance sends a registration request to the registry Nacos of the governance policy module, providing instance meta-information (service name, address, running state, etc.), and the service discovery module dynamically resolves the registry according to the call request and returns a list of service instances meeting the conditions. The configuration management function realizes dynamic configuration delivery through a publish-subscribe mode. After a service instance starts, it subscribes to the relevant configuration update topics, and the dynamic configuration data, combined with service discovery, influences the calling priority and routing behavior of the service. In combination with dynamic configuration, when traffic exceeds a threshold or service performance decreases, the module automatically triggers the degradation and rate-limiting mechanism to ensure system stability. The module periodically invokes the health check interface of each service instance to actively check its health status, and automatically marks abnormal service instances by analyzing real-time data of service calls (such as error rate and delay). In the overall service call chain, the module synchronizes the call context (e.g., trace ID, user tags), ensuring that call information across services is consistent.
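The passive health marking from real-time call data can be sketched as follows; the thresholds are assumed for illustration and are not specified in the source:

```python
def mark_unhealthy(instances, max_error_rate=0.05, max_delay_ms=500):
    """Flag instances whose error rate or call delay exceeds assumed thresholds."""
    return [i["name"] for i in instances
            if i["error_rate"] > max_error_rate or i["delay_ms"] > max_delay_ms]

instances = [
    {"name": "svc-a", "error_rate": 0.01, "delay_ms": 120},
    {"name": "svc-b", "error_rate": 0.12, "delay_ms": 90},   # error rate too high
    {"name": "svc-c", "error_rate": 0.00, "delay_ms": 900},  # delay too high
]
bad = mark_unhealthy(instances)   # these would be excluded from routing
```

In the described system the flagged instances would be deprioritized by service discovery until the active health check passes again.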
Through specific practical experiments, under the complex business scenario of high-concurrency multi-tenancy, the device is started and operated with a traffic pressure of 1000 TPS (transactions per second), 50 microservice instances and configuration parameters of 10 multi-active spaces. The experimental data show that the overall processing delay of the system is stable within 30 milliseconds, the global synchronization time after adjustment of the traffic scheduling rules is less than 1 second, the multi-tenant resource isolation accuracy reaches 99.9%, and the execution success rate of service policy injection and dynamic enhancement is 98.7%. In the simulation experiment, the platform successfully realizes accurate traffic distribution and complete link tracing, supports the display of a test window for blue-green deployment and canary release, and reduces the failure rate of version switching to 0.05%.
As shown in Table 1, theoretical analysis shows that, through cooperation among the modules, such as the multidimensional rule configuration module, the full-link gray-scale isolation module, the service policy injection module and the full-link transparent transmission implementation module, the platform can efficiently manage diversity in a complex system, and realize dynamic resource scheduling, real-time monitoring and elastic system expansion in high-concurrency scenarios. The device operates stably, supports rapid iteration of new service functions, flexible expansion and performance optimization, and provides a comprehensive guarantee for the multi-tenant architecture and modernized microservice management of enterprises.
Table 1 comparison of technical characteristics
Compared with the prior art, in terms of flexibility, the multidimensional rule configuration module supports dynamically constructed rules, loaded and adjusted in real time based on the scenario, and adopts a microkernel architecture that retains only the necessary modules, such as basic communication and plug-in management, while other extended functions are implemented independently as modules or plug-ins. This design reduces the complexity of the core and improves the stability and maintainability of the system. Modules and plug-ins support loading and unloading on demand, so functions can be flexibly adjusted according to service requirements; traffic strategies such as sticky routing (ensuring that requests from the same user are always routed to the same instance) and label routing (determining the routing path according to a label field) are provided; and the full-link gray-scale isolation module supports service isolation, multi-tenant environments and fine-grained traffic management. The bytecode enhancement module allows functions to be dynamically inserted or problems repaired at runtime, eliminating the need to restart the service. This is particularly suitable for scenarios with high availability requirements, avoiding service interruption. In terms of reliability, the service policy injection module detects the health state of service instances by combining active probing and passive monitoring, covering resource utilization, network delay, response time and the like. When an instance is in poor health, traffic can be switched to a healthy standby instance in real time, ensuring service continuity. The design of multi-active traffic scheduling enables dynamic scheduling across regions or availability zones to achieve fault isolation.
Distribution is adjusted according to traffic pressure and resource usage through an intelligent routing strategy, avoiding single-point overload. In scenarios with higher policy consistency requirements, event bus coordination and strong synchronization guarantees ensure synchronization reliability through distributed transactions or strong consistency algorithms. In terms of adaptability, flexible traffic scheduling rules are provided, such as distribution by priority, gray release and user grouping, to meet the requirements of various service models; common load balancing algorithms (such as round-robin, minimum response time and hash routing) are supported, and developers are allowed to customize algorithms for special requirements. Routing rules can be dynamically adjusted without interrupting service, ensuring service flexibility and real-time responsiveness. The system has built-in support for common communication protocols and allows new protocols to be added, so it is widely applicable to existing service architectures. In terms of portability, all modules define their functions through interfaces, allowing modules to be implemented independently in different languages or frameworks, enhancing cross-language compatibility; the running environment is automatically adapted when a module is loaded, and unloaded modules do not affect the performance and stability of core services. The framework is suitable for complex microservice systems and can also function under a monolithic architecture, facilitating smooth transition. The service discovery, automatic scaling and container orchestration capabilities of Kubernetes are natively integrated, improving containerized deployment efficiency.
Multi-cloud compatibility facilitates migration between different cloud environments and avoids vendor lock-in.
The foregoing embodiments may be modified in numerous ways by those skilled in the art without departing from the principles and spirit of the invention. The scope of the invention is defined by the claims and not by the foregoing embodiments, and all such implementations fall within the scope of the invention.

Claims (10)

1.一种基于无代理服务网格框架的容器镜像生成系统,其特征在于,包括:多维规则配置模块、全链路灰度隔离模块、微服务策略注入模块、字节增强模块、链路透传实现模块和镜像生成模块,其中:多维规则配置模块根据用户系统开发文档中的多活架构模型和流量调度规则,解析规则配置参数并完成多活空间的配置,通过流量调度算法计算流量分布,生成配置文件;全链路灰度隔离模块基于多租户的资源和业务隔离需求,依据多活流量调度策略计算实例流量权重并动态生成资源隔离配置;微服务策略注入模块利用动态加载机制,在运行时对应用程序的字节码进行嵌入服务发现和链路追踪;字节增强模块对字节码进行移除冗余逻辑和方法内联优化;链路透传实现模块基于扩展透传消息结构,拦截服务请求并计算链路内流量闭环参数,通过封装处理生成完整链路配置文件;镜像生成模块将各模块的输出进行整合并使用容器化工具构建生成可部署的容器镜像文件。1. A container image generation system based on an agentless service grid framework, characterized in that it includes: a multi-dimensional rule configuration module, a full-link grayscale isolation module, a microservice policy injection module, a byte enhancement module, a link transparent transmission implementation module and an image generation module, wherein: the multi-dimensional rule configuration module parses the rule configuration parameters and completes the configuration of the multi-active space according to the multi-active architecture model and traffic scheduling rules in the user system development document, calculates the traffic distribution through the traffic scheduling algorithm, and generates a configuration file; the full-link grayscale isolation module calculates the instance traffic weight based on the multi-active traffic scheduling strategy based on the resource and business isolation requirements of multiple tenants and dynamically generates the resource isolation configuration; the microservice policy injection module uses a dynamic loading mechanism to embed service discovery and link tracking into the bytecode of the application at runtime; the byte enhancement module removes redundant logic and method inline optimization of the bytecode; the link transparent transmission implementation module intercepts service requests and calculates the closed-loop parameters of the traffic in the link based on the extended transparent transmission message structure, and generates a complete link configuration file through encapsulation processing; the image 
generation module integrates the outputs of each module and uses a containerization tool to build a deployable container image file. 2.根据权利要求1所述的基于无代理服务网格框架的容器镜像生成系统,其特征是,所述的多维规则配置模块包括:规则解析单元、流量分配单元、资源权重计算单元以及配置生成单元,其中:规则解析单元根据用户提供的多活架构模型、流量调度规则和域名配置参数,解析关键规则信息,提取多活空间的配置项和流量分配逻辑,得到可供计算的基础规则数据;流量分配单元根据规则解析单元提供的流量分配逻辑和多活空间定义,结合实时流量监控数据,对流量分布进行计算处理,得到流量在不同多活单元中的分配比例和路径规划结果;资源权重计算单元根据流量分配单元计算的流量比例和实例的资源可用性指标,计算并动态调整每个实例的资源权重,得到多租户间的资源分配结果;配置生成单元根据资源权重计算单元生成的权重数据和流量分配路径规划结果,进行配置文件的动态生成处理,得到用于服务网格调度的多维规则配置文件,供后续模块调用与部署使用。2. According to the container image generation system based on the proxyless service grid framework of claim 1, it is characterized in that the multi-dimensional rule configuration module includes: a rule parsing unit, a traffic allocation unit, a resource weight calculation unit and a configuration generation unit, wherein: the rule parsing unit parses key rule information according to the multi-active architecture model, traffic scheduling rules and domain name configuration parameters provided by the user, extracts the configuration items and traffic allocation logic of the multi-active space, and obtains the basic rule data available for calculation; the traffic allocation unit calculates and processes the traffic distribution according to the traffic allocation logic and multi-active space definition provided by the rule parsing unit, combined with real-time traffic monitoring data, to obtain the distribution ratio and path planning results of the traffic in different multi-active units; the resource weight calculation unit calculates and dynamically adjusts the resource weight of each instance according to the traffic ratio calculated by the traffic allocation unit and the resource availability index of the instance, and obtains the resource allocation result among multiple tenants; the configuration generation unit dynamically generates and processes the configuration file according to the weight data and traffic allocation path planning result 
generated by the resource weight calculation unit, and obtains a multi-dimensional rule configuration file for service grid scheduling for subsequent module calling and deployment. 3.根据权利要求1所述的基于无代理服务网格框架的容器镜像生成系统,其特征是,所述的全链路灰度隔离模块包括:多租户隔离单元、流量规则导入单元、实例流量分配单元以及版本管理单元,其中:多租户隔离单元根据租户的业务属性、资源使用需求和隔离策略,进行租户间资源和服务的隔离处理,得到确保不同租户数据和流量完全独立的多租户隔离方案;流量规则导入单元根据用户配置的流量分配规则(如用户属性、地理位置和业务特性等),进行规则解析与导入处理,得到适用于不同租户和业务场景的流量调度规则;实例流量分配单元根据流量规则导入单元的结果及系统中各实例的实时状态,通过基于责任链的粘连过滤器、标签过滤器和负载均衡过滤器进行流量分配计算和路径规划处理,得到每个实例的流量分配比例和执行方案;版本管理单元根据业务逻辑需求和系统升级策略,将不同版本的微服务运行在独立的泳道中,进行版本隔离和动态管理,得到支持蓝绿部署、金丝雀发布等场景的灵活升级方案。3. According to the container image generation system based on the proxyless service grid framework of claim 1, it is characterized in that the full-link grayscale isolation module includes: a multi-tenant isolation unit, a traffic rule import unit, an instance traffic distribution unit and a version management unit, wherein: the multi-tenant isolation unit performs isolation processing of resources and services between tenants according to the business attributes, resource usage requirements and isolation strategies of the tenants, and obtains a multi-tenant isolation scheme that ensures that the data and traffic of different tenants are completely independent; the traffic rule import unit performs rule parsing and import processing according to the traffic distribution rules configured by the user (such as user attributes, geographical location and business characteristics, etc.), and obtains traffic scheduling rules suitable for different tenants and business scenarios; the instance traffic distribution unit performs traffic distribution calculation and path planning processing through a adhesion filter, a label filter and a load balancing filter based on the chain of responsibility according to the results of the traffic rule import unit and the real-time status of each instance in the system, and obtains the traffic distribution ratio and execution scheme of each instance; the 
version management unit runs different versions of microservices in independent lanes according to business logic requirements and system upgrade strategies, performs version isolation and dynamic management, and obtains a flexible upgrade scheme that supports scenarios such as blue-green deployment and canary release. 4.根据权利要求1所述的基于无代理服务网格框架的容器镜像生成系统,其特征是,所述的服务策略注入模块包括:策略适配单元、控制平面单元、注册中心同步单元以及字节增强单元,其中:策略适配单元根据用户输入的规则信息,进行策略解析和配置处理,得到适配主流服务框架的动态策略配置方案;控制平面单元根据系统运行状态和业务需求,进行弹性增强、事件总线配置、同步策略实施、工作日志记录和系统遥测数据开放,得到具备实时监控、弹性扩展和事件驱动能力的服务控制平面;注册中心同步单元通过实时同步各注册中心的服务信息,确保多数据中心或多区域的服务状态一致性,为跨区域负载均衡和容灾备份提供技术支撑,实时同步服务实例的健康状态、上下线信息以及元数据的变更,确保跨区域负载均衡能够基于最新的服务状态决策;字节增强单元根据服务策略和动态加载需求,利用字节码操作技术对目标应用进行增强处理,得到已注入所需策略逻辑的高性能服务实例。4. According to the container image generation system based on the agentless service grid framework of claim 1, it is characterized in that the service policy injection module includes: a policy adaptation unit, a control plane unit, a registration center synchronization unit and a byte enhancement unit, wherein: the policy adaptation unit performs policy parsing and configuration processing according to the rule information input by the user, and obtains a dynamic policy configuration scheme adapted to the mainstream service framework; the control plane unit performs elasticity enhancement, event bus configuration, synchronization policy implementation, work log recording and system telemetry data opening according to the system operation status and business needs, and obtains a service control plane with real-time monitoring, elastic expansion and event-driven capabilities; the registration center synchronization unit ensures the consistency of service status of multiple data centers or multiple regions by synchronizing the service information of each registration center in real time, provides technical support for cross-regional load balancing and disaster recovery, and synchronizes the health status, online and offline information and metadata changes of 
service instances in real time to ensure that cross-regional load balancing can be based on the latest service status decision; the byte enhancement unit uses bytecode operation technology to enhance the target application according to the service policy and dynamic loading requirements to obtain a high-performance service instance that has been injected with the required policy logic. 5.根据权利要求1所述的基于无代理服务网格框架的容器镜像生成系统,其特征是,所述的字节增强模块包括:代码拦截单元、修改点定位单元、指令集操作单元以及代码重加载单元,其中:代码拦截单元根据程序加载时的二进制文件或中间表示文件信息,进行目标代码文件的读取和预处理,得到可供分析的程序中间表示;修改点定位单元根据拦截单元提供的代码信息,结合增强需求,选择需要增强的具体代码片段,并判断满足增强条件的修改点,得到目标修改点列表;指令集操作单元根据目标修改点列表,进行指令插入、替换或删除处理,注入所需的字节码逻辑,得到增强后的指令集;代码重加载单元根据修改后的指令集,进行重新加载处理,将增强后的代码覆盖到运行环境中,得到已完成增强的高性能程序逻辑。5. According to the container image generation system based on the agentless service grid framework described in claim 1, it is characterized in that the byte enhancement module includes: a code interception unit, a modification point positioning unit, an instruction set operation unit and a code reloading unit, wherein: the code interception unit reads and preprocesses the target code file according to the binary file or intermediate representation file information when the program is loaded, and obtains the intermediate representation of the program that can be analyzed; the modification point positioning unit selects the specific code fragments that need to be enhanced according to the code information provided by the interception unit and the enhancement requirements, and determines the modification points that meet the enhancement conditions to obtain a target modification point list; the instruction set operation unit inserts, replaces or deletes instructions according to the target modification point list, injects the required bytecode logic, and obtains the enhanced instruction set; the code reloading unit reloads according to the modified instruction set, overwrites the enhanced code into the running environment, and obtains the enhanced high-performance 
program logic. 6.根据权利要求1所述的基于无代理服务网格框架的容器镜像生成系统,其特征是,所述的全链路透传实现模块包括:消息拦截单元、透传消息定义单元、数据聚合封装单元以及链路闭环单元,其中:消息拦截单元根据服务消费者与提供者之间的交互请求信息,进行请求拦截和元数据提取处理,得到包含请求标识、上下文信息等透传消息的原始数据;透传消息定义单元根据预定义的扩展透传消息结构,进行消息格式的解析与规范化处理,得到符合链路追踪和流量管理需求的标准化透传消息;数据聚合封装单元根据标准化透传消息,进行数据聚合与封装处理,得到包含完整链路信息的透传数据包;链路闭环单元根据封装后的透传数据包,结合服务单元和泳道规则,进行链路内流量闭环计算和路径优化处理,得到闭环链路的执行方案。6. According to the container image generation system based on the proxyless service grid framework described in claim 1, it is characterized in that the full-link transparent transmission implementation module includes: a message interception unit, a transparent transmission message definition unit, a data aggregation and encapsulation unit and a link closed-loop unit, wherein: the message interception unit performs request interception and metadata extraction processing according to the interactive request information between the service consumer and the provider, and obtains the original data of the transparent transmission message including the request identifier, context information, etc.; the transparent transmission message definition unit performs message format parsing and normalization processing according to the predefined extended transparent transmission message structure, and obtains a standardized transparent transmission message that meets the requirements of link tracking and traffic management; the data aggregation and encapsulation unit performs data aggregation and encapsulation processing according to the standardized transparent transmission message, and obtains a transparent transmission data packet containing complete link information; the link closed-loop unit performs intra-link traffic closed-loop calculation and path optimization processing according to the encapsulated transparent transmission data packet, combined with the service unit and the lane rules, to obtain an execution plan for the closed-loop link. 
7.根据权利要求2所述的基于无代理服务网格框架的容器镜像生成系统,其特征是,所述的资源权重计算是指:根据实时的负载反馈(如当前CPU使用率Ci),更新权重: 其中:α是平滑因子,用于控制新旧负载信息的融合比例;再通过深度确定性策略梯度算法,在动态环境下不仅通过简单反馈公式而能自适应地调节流量权重,具体包括:定义状态空间其中:Ci表示第i个服务器的CPU使用率,Ti表示第i个服务器的平均响应时间,Qi表示第i个服务器的当前队列长度;定义动作,即对各服务器权重的调整比例空间At={ΔW1,ΔW2,…,ΔWn};定义奖励函数,即需要衡量流量分配的效率其中:Var是流量分配的方差,越小越平衡,λ12是超参数,用于权衡平衡性和响应时间。7. According to the container image generation system based on the agentless service grid framework of claim 2, it is characterized in that the resource weight calculation refers to: updating the weight according to real-time load feedback (such as the current CPU usage rate C i ): Among them: α is a smoothing factor, which is used to control the fusion ratio of new and old load information; then through the deep deterministic policy gradient algorithm, in a dynamic environment, the traffic weight can be adaptively adjusted not only through a simple feedback formula, but also through the deep deterministic policy gradient algorithm, which includes: defining the state space Where: Ci represents the CPU usage of the ith server, Ti represents the average response time of the ith server, Qi represents the current queue length of the ith server; define the action, that is, the adjustment ratio space of the weight of each server At = { ΔW1 , ΔW2 , ..., ΔWn }; define the reward function, that is, the efficiency of traffic distribution needs to be measured Where: Var is the variance of traffic distribution, the smaller it is, the more balanced it is; λ 1 and λ 2 are hyperparameters used to weigh balance and response time. 8.根据权利要求4所述的基于无代理服务网格框架的容器镜像生成系统,其特征是,所述的注册中心同步包括:基于强一致性相关的策略服务执行、服务控制器治理和上下文同步服务协调,采用基于日志复制的同步机制结合基于矢量时钟的强一致性同步,具体为:Li={l1,l2,…,ln},注册中心状态由日志序列决定:Si=f(Li),日志同步规则为:Lfollower=lleader,whereLleader=max(Lfollower),其中:f代表状态生成函数,实际业务压测发现主节点压力大,易成为单点故障且需要强一致性时延长同步时延,一致性代价较高;8. 
According to claim 4, the container image generation system based on the agentless service grid framework is characterized in that the registration center synchronization includes: based on strong consistency-related policy service execution, service controller governance and context synchronization service coordination, a synchronization mechanism based on log replication is combined with strong consistency synchronization based on vector clock, specifically: Li = { l1 , l2 , ..., ln }, the registration center state is determined by the log sequence: Si = f( Li ), the log synchronization rule is: L follower = l leader , where L leader = max(L follower ), where: f represents the state generation function, actual business stress testing found that the master node is under great pressure and is prone to become a single point of failure. When strong consistency is required, the synchronization delay is extended, and the consistency cost is high; 所述的强一致性同步中,每个节点维护一个矢量VC[i],表示节点i所观察到的全局事件的时间戳,其中:矢量时钟VC的元素数等于节点数,每个元素VC[i]是节点i的本地时间,每个注册中心Ri初始化矢量时钟VCi,元素为0,当服务实例注册或注销时,注册中心Ri增加其本地矢量时钟:VCi[i]←VCi[i]+1,其中:服务状态变更用事件Ei表示,包含矢量时钟VCi和变更内容,同步过程如下:注册中心Ri将事件Ei发送给其他注册中心Rj,接收方Rj合并矢量时钟:根据事件时间戳判断是否应用变更,若多个事件发生冲突,使用矢量时钟排序规则: 因此可以确保事件在所有节点上按相同顺序处理,消除不一致状态。In the strong consistency synchronization described above, each node maintains a vector VC[i], which represents the timestamp of the global event observed by node i. Among them: the number of elements of the vector clock VC is equal to the number of nodes, each element VC[i] is the local time of node i, each registration center R i initializes the vector clock VC i , the element is 0, when the service instance is registered or deregistered, the registration center R i increases its local vector clock: VC i [i]←VC i [i]+1, among them: the service status change is represented by the event E i , which contains the vector clock VC i and the change content. 
The synchronization process is as follows: the registration center R i sends the event E i to other registration centers R j , and the receiver R j merges the vector clock: Determine whether to apply changes based on event timestamps. If multiple events conflict, use vector clock sorting rules: This ensures that events are processed in the same order on all nodes, eliminating inconsistent states. 9.根据权利要求8所述的基于无代理服务网格框架的容器镜像生成系统,其特征是,所述的策略服务执行具体是指:通过动态加载规则、实时状态感知、策略执行与同步、多区域一致性保障等步骤,实现强一致性的跨区域服务治理,首先,在服务启动阶段加载策略规则并监控规则变更,将其解析为统一的策略模型存储到内存中,其次,通过注册中心实时感知服务实例的健康状态、上下线信息和元数据变更,动态更新本地决策,包括权重调整、限流阈值设定和路由优化,然后,将策略执行结果通过事件总线或注册中心广播至所有节点,结合版本控制确保同步结果的一致性;若同步中检测到冲突,利用主从机制或投票方式解决,必要时回滚至上一版本,最后,通过增量同步、批量处理和事务机制优化多区域同步效率,同时保持状态一致性,支持多协议适配,确保异构系统中的策略同步无误;9. According to the container image generation system based on the agentless service grid framework of claim 8, it is characterized in that the policy service execution specifically refers to: through the steps of dynamic loading rules, real-time state perception, policy execution and synchronization, multi-region consistency assurance, etc., to achieve strong consistency cross-region service governance. First, in the service startup phase, the policy rules are loaded and the rule changes are monitored, and they are parsed into a unified policy model and stored in the memory. Secondly, the health status, online and offline information and metadata changes of the service instance are perceived in real time by the registration center, and local decisions are dynamically updated, including weight adjustment, current limiting threshold setting and routing optimization. Then, the policy execution results are broadcast to all nodes through the event bus or the registration center, and the consistency of the synchronization results is ensured in combination with version control. 
If a conflict is detected during synchronization, it is resolved by a master-slave mechanism or a voting scheme, rolling back to the previous version when necessary. Finally, multi-region synchronization efficiency is optimized through incremental synchronization, batching, and transactional mechanisms, while state consistency is maintained and multi-protocol adaptation is supported, ensuring correct policy synchronization across heterogeneous systems. Service controller governance means ensuring efficient, stable, and reliable service calls under the multi-active architecture through fine-grained control and policy-based management, covering service routing, load balancing, rate limiting, degradation, and related functions, so as to guarantee service quality and use resources efficiently. The service controller distributes requests to qualified service instances through dynamic routing rules and senses service status in real time to adjust routing; it optimizes resource allocation by combining dynamic weight updates with load-balancing algorithms such as round-robin, weighted random, or least connections; it sets rate-limiting rules with a token bucket algorithm, limiting the request rate by QPS or TPS and configuring contingency strategies for over-limit situations; and when a service is unavailable, it executes degradation strategies based on trigger conditions, such as returning a default response or calling a backup service.
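The token-bucket rate limiting mentioned above can be sketched as follows (Python; the refill interface and the QPS figures are illustrative assumptions):

```python
class TokenBucket:
    """Allows bursts up to `capacity`; refills at `rate` tokens per second."""

    def __init__(self, rate, capacity):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity

    def refill(self, elapsed_seconds):
        # Driven by a timer in practice; called explicitly here for clarity.
        self.tokens = min(self.capacity, self.tokens + self.rate * elapsed_seconds)

    def allow(self):
        # One token per request; an over-limit request would take the
        # contingency path (reject, queue, or degrade).
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False


bucket = TokenBucket(rate=100, capacity=10)   # ~100 QPS, burst of 10
results = [bucket.allow() for _ in range(12)]
print(results.count(True))   # 10 — the burst capacity
bucket.refill(0.05)          # 50 ms later: 5 tokens restored
print(bucket.allow())        # True
```

The bucket capacity bounds the burst size while the refill rate bounds the sustained QPS, which is why the claim pairs the algorithm with per-QPS/TPS thresholds.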
Context synchronization service coordination ensures the consistency of context information across node services in a distributed multi-active architecture. Through this mechanism, services on different nodes share key context data, achieving global visibility and consistent processing along the request chain. Context information is propagated between nodes through efficient serialization and network transmission, and a synchronization protocol guarantees the consistency of state data between nodes, supporting both eventual-consistency and strong-consistency models; the specific choice depends on the requirements of the business scenario. 10. An implementation device of the container image generation system based on the agentless service mesh framework according to any one of claims 1 to 9, characterized in that it comprises: an upper-layer application, the container image generation system, and an external data layer. The upper-layer application functions include rule input, policy injection result feedback, multi-active model visualization, and service retrieval. Rule input can use Spring Boot to implement RESTful API and gRPC interfaces that receive rules in JSON or YAML format; the rule content is parsed by an ANTLR parser, the responsibility chain is dynamically enhanced with Javassist or ByteBuddy, and the rules are injected in real time, taking effect without a restart. Policy injection feedback uses Prometheus to collect execution data, passes results over a Kafka or RabbitMQ event bus, and notifies the upper-layer application through Webhook callbacks. Multi-active model visualization integrates OpenTelemetry or custom probes to collect node and traffic data, uses Redis for data aggregation, pushes data updates over WebSocket, and renders the topology view dynamically with ECharts. The service retrieval module synchronizes service instance information from Consul or Eureka, builds a distributed index with Elasticsearch, and provides efficient multi-dimensional queries with real-time updates. The data layer operates on external data sources through a persistence-layer interface and a transaction management framework, using the standard JDBC interface to operate the database directly, or simplifying database operations through an ORM framework, and combining database transactions with business logic to guarantee data integrity; when handling document, time-series, or key-value data, it can cooperate with non-relational databases via distributed cache systems or message queues.
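The restart-free rule injection into a responsibility chain can be illustrated as follows (a Python sketch standing in for the Javassist/ByteBuddy bytecode enhancement described; handler and rule names are assumptions):

```python
class Handler:
    """One link in the responsibility chain; links can be added at runtime."""

    def __init__(self, name, rule):
        self.name = name
        self.rule = rule  # predicate: request -> bool (pass / reject)

    def handle(self, request):
        return self.rule(request)


class Chain:
    def __init__(self):
        self.handlers = []

    def inject(self, handler):
        # New rules are appended live; the running chain needs no restart.
        self.handlers.append(handler)

    def process(self, request):
        # A request passes only if every handler's rule accepts it.
        return all(h.handle(request) for h in self.handlers)


chain = Chain()
chain.inject(Handler("auth", lambda r: r.get("token") == "ok"))
print(chain.process({"token": "ok"}))   # True
# A rule injected at runtime immediately applies to subsequent requests:
chain.inject(Handler("region", lambda r: r.get("region") == "east"))
print(chain.process({"token": "ok"}))   # False
```

In the claimed system the equivalent step rewrites bytecode in the running JVM rather than appending to a Python list, but the observable effect is the same: parsed rules take effect on the next request without redeployment.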
Service policy injection in the container image generation system specifically includes: at startup, a service instance sends a registration request to Nacos, the registration center of the governance policy module, providing instance metadata; the service discovery module dynamically resolves the registry according to the call request and returns a list of qualified service instances. The configuration management function implements dynamic configuration delivery through a subscribe-publish pattern: after a service instance starts, it subscribes to the relevant configuration update topics, and the dynamic configuration data, combined with service discovery, influences the call priority and routing behavior of the service. Together with the dynamic configuration, when traffic exceeds a threshold or service performance degrades, the module automatically triggers degradation and rate-limiting mechanisms to keep the system stable. The module periodically calls the health-check interface of each service instance to actively check its health, and by analyzing real-time service-call data it automatically marks abnormal service instances.
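The registration, discovery, and health-marking flow can be sketched as follows (Python; a simplified stand-in for the Nacos registry interaction, with hypothetical method names):

```python
class Registry:
    """Minimal service registry: register instances, discover healthy ones."""

    def __init__(self):
        self.instances = {}  # (service, addr) -> {"meta": ..., "healthy": bool}

    def register(self, service, addr, meta):
        # A starting instance registers itself with its metadata.
        self.instances[(service, addr)] = {"meta": meta, "healthy": True}

    def mark_unhealthy(self, service, addr):
        # The health checker flags instances that fail their check interface.
        self.instances[(service, addr)]["healthy"] = False

    def discover(self, service):
        # Callers receive only the healthy instances of the requested service.
        return [addr for (svc, addr), info in self.instances.items()
                if svc == service and info["healthy"]]


reg = Registry()
reg.register("orders", "10.0.0.1:8080", {"region": "east", "weight": 2})
reg.register("orders", "10.0.0.2:8080", {"region": "west", "weight": 1})
reg.mark_unhealthy("orders", "10.0.0.2:8080")
print(reg.discover("orders"))   # ['10.0.0.1:8080']
```

The per-instance metadata (region, weight) is what the service controller later consumes for routing and weighted load balancing.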
In the overall service call chain, the module synchronizes the call context to ensure that the call information across services is consistent.
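Call-context synchronization across services can be illustrated with a minimal propagation sketch (Python; the header name and context fields are assumptions, not part of the claimed system):

```python
import json


def inject_context(ctx, headers):
    # The caller serializes the call context into an outbound header.
    headers["x-call-context"] = json.dumps(ctx)
    return headers


def extract_context(headers):
    # The callee restores the identical context, keeping the call
    # information consistent across every hop of the chain.
    return json.loads(headers["x-call-context"])


ctx = {"trace_id": "abc123", "caller": "orders", "priority": 5}
headers = inject_context(ctx, {"content-type": "application/json"})
print(extract_context(headers) == ctx)  # True
```

Each downstream service repeats the inject/extract pair, so the same trace and priority data is visible along the entire request chain.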
CN202510122047.3A 2025-01-26 2025-01-26 Container mirror image generating system based on agent-free service grid framework Pending CN120045285A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202510122047.3A CN120045285A (en) 2025-01-26 2025-01-26 Container mirror image generating system based on agent-free service grid framework

Publications (1)

Publication Number Publication Date
CN120045285A true CN120045285A (en) 2025-05-27

Family

ID=95756069

Country Status (1)

Country Link
CN (1) CN120045285A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN120807515A (en) * 2025-09-12 2025-10-17 天翼视联科技股份有限公司 Automatic training method and device for video AI algorithm model based on byte code enhancement


Similar Documents

Publication Publication Date Title
US8112262B1 (en) Service modeling and virtualization
Homer et al. Cloud Design Patterns
CN109828831A (en) A kind of artificial intelligence cloud platform
CN115115329B (en) Intelligent production line-oriented manufacturing middleware device and cloud manufacturing architecture system
EP2959387B1 (en) Method and system for providing high availability for state-aware applications
Cheng et al. Making self-adaptation an engineering reality
Sun et al. Key Technologies for Big Data Stream Computing.
Chechina et al. Evaluating scalable distributed Erlang for scalability and reliability
CN120045285A (en) Container mirror image generating system based on agent-free service grid framework
Kumar Event-Driven App Design for High-Concurrency Microservices
Reddy et al. A Scalable Microservices Architecture for Real-Time Data Processing in Cloud-Based Applications.
CN113127441A (en) Method for dynamically selecting database components and self-assembly database management system
CN120560848A (en) Method, device, electronic device and storage medium for dynamically adjusting thread pool threads
Caromel et al. Peer-to-Peer and fault-tolerance: Towards deployment-based technical services
Křikava Domain-specific modeling language for self-adaptive software system architectures
Wolf et al. Supporting component-based failover units in middleware for distributed real-time and embedded systems
Halima et al. A large‐scale monitoring and measurement campaign for web services‐based applications
Zhou et al. A middleware platform for the dynamic evolution of distributed component-based systems
CN120803766A (en) Dynamic integration system and method for large electric power model platform and heterogeneous system
Ibáñez et al. Reconfigurable applications using gcmscript
Lin Uniformly Programmable, Distributed, Reliable, Event-based Systems for Multi-Tier IoT Deployments
Ameling et al. Replication in Service Oriented Architectures.
CN121210212A (en) Cloud architecture-based remote sensing data processing task hot-replacing method
Wolf Component-based Fault Tolerance for Distributed Real-Time and Embedded Systems
Solomon A real-time pattern based architecture for autonomic systems

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination