CN112217878A - High-concurrency request distribution method and system - Google Patents
High-concurrency request distribution method and system
- Publication number: CN112217878A
- Application number: CN202011007291.9A
- Authority
- CN
- China
- Prior art keywords
- request
- service
- client
- high concurrency
- module
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- H04L67/63—Routing a service request depending on the request content or context
- H04L63/0807—Network architectures or network communication protocols for network security; authentication of entities using tickets, e.g. Kerberos
- H04L67/02—Protocols based on web technology, e.g. hypertext transfer protocol [HTTP]
- H04L67/1004—Server selection for load balancing
- H04L69/08—Protocols for interworking; Protocol conversion
Landscapes
- Engineering & Computer Science (AREA)
- Computer Networks & Wireless Communication (AREA)
- Signal Processing (AREA)
- Computer Security & Cryptography (AREA)
- Computer Hardware Design (AREA)
- Computing Systems (AREA)
- General Engineering & Computer Science (AREA)
- Data Exchanges In Wide-Area Networks (AREA)
- Computer And Data Communications (AREA)
Abstract
The invention discloses a high-concurrency request distribution method and system. The method comprises the following steps: S1, the server receives a high-concurrency request sent by the client; S2, the client forwards the request to an upstream service through a router; S3, a unified interface scheduling entrance processes and forwards the high-concurrency request; S4, the client receives and processes the returned result. With this method and system, two mutually independent systems can communicate with each other through the router, which forwards requests to the upstream service on behalf of the client and processes the returned results. The system uses a distributed architecture, interface configuration is updated in real time, requests are forwarded with millisecond-level response, and problems are discovered and located in time through log monitoring so that the responsible persons can resolve them, improving working efficiency.
Description
Technical Field
The invention relates to the technical field of computer software, in particular to a high-concurrency request distribution method and system.
Background
High concurrency means that a large number of requests arrive at the server side simultaneously or within a very short time, and each request requires the server to consume resources for processing and to return a corresponding response. From the server's perspective, handling high concurrency consumes server resources such as the number of processes that can be started simultaneously, the number of threads that can run simultaneously, the number of network connections, CPU, I/O, and memory. In the prior art, when requests are handled with high-concurrency techniques, the response speed is low, which reduces working efficiency.
Disclosure of Invention
The invention aims to provide a high-concurrency request distribution method and system.
The invention provides a high-concurrency request distribution method, which comprises the following steps:
S1, a step in which the server receives a high-concurrency request sent by the client;
S2, a step in which the client forwards the request to an upstream service through a router;
S3, a step in which a unified interface scheduling entrance processes and forwards the high-concurrency request;
S4, a step in which the client receives and processes the returned result.
The S3 step in which the unified interface scheduling entrance processes and forwards the high-concurrency request includes: S31, a permission judgment step, used for judging, when an application or client outside the system accesses the system, whether the requesting end has permission to enter the system; if so, it enters the system after authentication, otherwise it is denied entry; S32, a protocol conversion step, used for converting between protocols through generic invocation if the transmission protocols are inconsistent; S33, a load balancing step, used for invoking horizontally scaled services; S34, a distributed traffic limiting step, used for limiting the total traffic of the system with a token bucket algorithm if the requested traffic exceeds the system's capacity; and S35, a log monitoring step, used for monitoring interface exceptions in real time, periodically analyzing interface call patterns, and pushing monitoring reports to the corresponding users by e-mail. The high-concurrency request is an HTTP request. The S3 step adopts a chain-of-responsibility pattern, decoupling the request from its processing steps and processing the request step by step. In the S31 permission judgment step, the user acquires a Token through the login service, the acquired Token is stored on the client, and the Token is placed in the request sent to the server on every request. When a microservice is dynamically scaled out, registration information of the microservice is acquired through the service registry. The S34 distributed traffic limiting step includes a circuit breaking and degradation step: when an application service is abnormal and unavailable, circuit breaking and degradation are performed on the service; the problem is monitored through the service registry, and routing requests are restored to the service once it recovers.
The invention also provides a high-concurrency request distribution system, which comprises:
a module used for the server to receive a high-concurrency request sent by the client;
a module used for the client to forward the request to an upstream service through a router;
a module used for a unified interface scheduling entrance to process and forward the high-concurrency request;
and a module used for the client to receive and process the returned result.
The module for the unified interface scheduling entrance to process and forward the high-concurrency request includes: a permission judgment sub-module, used for judging, when an application or client outside the system accesses the system, whether the requesting end has permission to enter the system; if so, it enters the system after authentication, otherwise it is denied entry; a protocol conversion sub-module, used for converting between protocols through generic invocation if the transmission protocols are inconsistent; a load balancing sub-module, used for invoking horizontally scaled services; a distributed traffic limiting sub-module, used for limiting the total traffic of the system with a token bucket algorithm if the requested traffic exceeds the system's capacity; and a log monitoring sub-module, used for monitoring interface exceptions in real time, periodically analyzing interface call patterns, and pushing monitoring reports to the corresponding users by e-mail. The high-concurrency request is an HTTP request. The module for the unified interface scheduling entrance to process and forward the high-concurrency request adopts a chain-of-responsibility pattern, decoupling the request from the modules that process it and processing the request step by step. In the permission judgment sub-module, the user acquires a Token through the login service, the acquired Token is stored on the client, and the Token is placed in the request sent to the server on every request. When a microservice is dynamically scaled out, registration information of the microservice is acquired through the service registry. The distributed traffic limiting sub-module includes a circuit breaking and degradation unit, used for performing circuit breaking and degradation on a service when the application service is abnormal and unavailable, monitoring the problem through the service registry, and restoring routing requests to the service once it recovers.
The invention thus provides a high-concurrency request distribution method and system that enable two mutually independent systems to communicate with each other through a router, forward requests to an upstream service on behalf of the client, and process the returned results. The system uses a distributed architecture, interface configuration is updated in real time, requests are forwarded with millisecond-level response, and problems are discovered and located in time through log monitoring so that the responsible persons can resolve them, improving working efficiency.
Drawings
Fig. 1 is a schematic step diagram of a high concurrency request distribution method according to an embodiment of the present invention;
fig. 2 is a schematic diagram of a step of processing and forwarding a high concurrency request by the S3 unified interface scheduling entry according to an embodiment of the present invention.
Detailed Description
To make the objects, technical solutions, and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the drawings of the embodiments. The described embodiments are some, but not all, of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments given herein without creative effort shall fall within the protection scope of the present invention.
Example one
As shown in Fig. 1, this embodiment provides a high-concurrency request distribution method, comprising the following steps:
S1, the server receives a high-concurrency request sent by the client;
S2, the client forwards the request to an upstream service through the router;
S3, the unified interface scheduling entrance processes and forwards the high-concurrency request;
S4, the client receives and processes the returned result.
Those skilled in the art will understand that the high-concurrency request distribution system enables two mutually independent systems to communicate with each other through the router, forwards requests to the upstream service on behalf of the client, and processes the returned results. The system uses a distributed architecture, interface configuration is updated in real time, requests are forwarded with millisecond-level response, and problems are discovered and located in time through log monitoring so that the responsible persons can resolve them, improving working efficiency.
As shown in Fig. 2, the S3 step in which the unified interface scheduling entrance processes and forwards the high-concurrency request includes:
S31, a permission judgment step, used for judging, when an application or client outside the system accesses the system, whether the requesting end has permission to enter the system; if so, it enters the system after authentication, otherwise it is denied entry;
S32, a protocol conversion step, used for converting between protocols through generic invocation if the transmission protocols are inconsistent;
S33, a load balancing step, used for invoking horizontally scaled services;
S34, a distributed traffic limiting step, used for limiting the total traffic of the system with a token bucket algorithm if the requested traffic exceeds the system's capacity;
and S35, a log monitoring step, used for monitoring interface exceptions in real time, periodically analyzing interface call patterns, and pushing monitoring reports to the corresponding users by e-mail.
Those skilled in the art will understand that generic invocation is mainly used when the client has no API interface classes or model classes: all POJOs in the parameters and return values are represented by Maps. It is usually used for framework integration, for example implementing a generic service test framework in which every service implementation can be called through GenericService. The high-concurrency request distribution method provided by this embodiment can realize verification, authentication, load balancing, service governance, protocol conversion, traffic limiting, circuit breaking and degradation, data caching, error alerting, and other functions. The system consists of main modules such as the core gateway, the interface management interface, and log monitoring. The core of the token bucket model is a fixed token-replenishment rate: a request must first obtain a token before it is processed, and traffic that cannot obtain a token is throttled. The token bucket algorithm limits the number of incoming requests, ensuring that the current request volume stays within the system's capacity and improving system stability. Logs are processed asynchronously and analyzed in real time with FileBeat + Kafka to realize exception alerting.
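As a minimal sketch of the token bucket idea described above (the patent gives no implementation, so the class name and parameters below are illustrative assumptions), a fixed-rate limiter could look like this in Java:

```java
/**
 * Minimal token bucket sketch: tokens are replenished at a fixed rate and a
 * request is admitted only if it can take a token; otherwise it is throttled.
 */
public class TokenBucketLimiter {
    private final long capacity;          // maximum number of stored tokens
    private final double refillPerMillis; // fixed replenishment ("inflow") rate
    private double tokens;
    private long lastRefillTime;

    public TokenBucketLimiter(long capacity, double tokensPerSecond) {
        this.capacity = capacity;
        this.refillPerMillis = tokensPerSecond / 1000.0;
        this.tokens = capacity;
        this.lastRefillTime = System.currentTimeMillis();
    }

    /** Returns true if the request obtained a token and may proceed. */
    public synchronized boolean tryAcquire() {
        long now = System.currentTimeMillis();
        // Replenish tokens according to elapsed time, capped at the bucket capacity.
        tokens = Math.min(capacity, tokens + (now - lastRefillTime) * refillPerMillis);
        lastRefillTime = now;
        if (tokens >= 1.0) {
            tokens -= 1.0;
            return true;
        }
        return false; // no token available: the request is rate limited
    }
}
```

In the gateway, tryAcquire() would be called once for every incoming request before it is forwarded, so that the total traffic stays within the system's capacity.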
Further, the high-concurrency request is an HTTP request. Conversion between protocols is realized through generic invocation, and the HTTP protocol is used uniformly inside the system. NIO asynchronous requests are processed with Netty, which improves system throughput.
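The description names Netty for NIO asynchronous request handling but gives no code; the following is only a sketch of how such an HTTP entry point might be bootstrapped, with a placeholder handler instead of the real dispatching and forwarding logic (the port and aggregation limit are assumptions):

```java
import io.netty.bootstrap.ServerBootstrap;
import io.netty.channel.*;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.SocketChannel;
import io.netty.channel.socket.nio.NioServerSocketChannel;
import io.netty.handler.codec.http.*;

public class GatewayServer {
    public static void main(String[] args) throws InterruptedException {
        EventLoopGroup boss = new NioEventLoopGroup(1);
        EventLoopGroup workers = new NioEventLoopGroup();
        try {
            ServerBootstrap bootstrap = new ServerBootstrap()
                    .group(boss, workers)
                    .channel(NioServerSocketChannel.class)
                    .childHandler(new ChannelInitializer<SocketChannel>() {
                        @Override
                        protected void initChannel(SocketChannel ch) {
                            ch.pipeline().addLast(new HttpServerCodec());
                            ch.pipeline().addLast(new HttpObjectAggregator(1 << 20));
                            ch.pipeline().addLast(new SimpleChannelInboundHandler<FullHttpRequest>() {
                                @Override
                                protected void channelRead0(ChannelHandlerContext ctx, FullHttpRequest req) {
                                    // A real gateway would run the request through the filter chain
                                    // and forward it upstream; here we only return an empty 200.
                                    FullHttpResponse resp = new DefaultFullHttpResponse(
                                            HttpVersion.HTTP_1_1, HttpResponseStatus.OK);
                                    resp.headers().set(HttpHeaderNames.CONTENT_LENGTH, 0);
                                    ctx.writeAndFlush(resp).addListener(ChannelFutureListener.CLOSE);
                                }
                            });
                        }
                    });
            bootstrap.bind(8080).sync().channel().closeFuture().sync();
        } finally {
            boss.shutdownGracefully();
            workers.shutdownGracefully();
        }
    }
}
```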
Further, the S3 step in which the unified interface scheduling entrance processes and forwards the high-concurrency request adopts a chain-of-responsibility pattern, decoupling the request from its processing steps and processing the request step by step.
Those skilled in the art will appreciate that a message enters at the first processing step and leaves after the last processing step, with each step processing the message as it passes, so that the whole process forms a chain.
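A minimal chain-of-responsibility sketch in that spirit is given below; the interface and class names are hypothetical and merely mirror the steps S31 to S35 described above:

```java
import java.util.List;

/** One processing step in the chain, e.g. permission judgment, protocol conversion, traffic limiting. */
interface GatewayFilter {
    /** Returns false to stop the chain, e.g. when authentication fails or the request is rate limited. */
    boolean handle(GatewayRequest request);
}

/** Minimal request holder; a real gateway would also carry headers, body, target service, etc. */
class GatewayRequest {
    String path;
    String token;
}

class FilterChain {
    private final List<GatewayFilter> filters;

    FilterChain(List<GatewayFilter> filters) {
        this.filters = filters;
    }

    /** Passes the request through every filter in order, stopping at the first rejection. */
    boolean process(GatewayRequest request) {
        for (GatewayFilter filter : filters) {
            if (!filter.handle(request)) {
                return false;
            }
        }
        return true; // all steps passed; the request can now be forwarded upstream
    }
}
```

Filters corresponding to permission judgment, protocol conversion, load balancing, traffic limiting, and log monitoring would be registered in order, so each request flows through the chain step by step as described.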
Further, in the S31 permission judgment step, the user acquires a Token through the login service, the acquired Token is stored on the client, and the Token is placed in the request sent to the server on every request.
Those skilled in the art will understand that the client carries the Token issued by the system in every request, and unified authentication realizes secure invocation of the system.
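As an illustration of a client carrying the issued Token on every request (a sketch only; the patent does not fix the transport detail, so the X-Auth-Token header name and the URL are assumptions):

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class TokenClient {
    public static void main(String[] args) throws Exception {
        // Obtained once from the login service and stored on the client.
        String token = "token-issued-by-login-service";
        HttpClient client = HttpClient.newHttpClient();
        // Every request carries the stored Token so the gateway can authenticate it.
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://gateway.example.com/api/orders"))
                .header("X-Auth-Token", token)
                .GET()
                .build();
        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode());
    }
}
```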
Further, when a microservice is dynamically scaled out, its registration information is acquired through the service registry, thereby realizing load balancing. If the upstream service adopts a microservice architecture, it can also cooperate with the registry to realize dynamic load balancing. When multiple replicas of the same application are attached behind the gateway, each user's request is routed to a corresponding service instance by the gateway's load balancing algorithm.
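The gateway's load balancing algorithm is not fixed by the description; one possible sketch is a round-robin choice over the instances currently known to the service registry (the class name is hypothetical):

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

/** Picks one instance of a service in round-robin order. */
public class RoundRobinBalancer {
    private final AtomicInteger counter = new AtomicInteger();

    /**
     * @param instances instance addresses currently registered for the target
     *                  service; refreshing this list from the service registry
     *                  means newly scaled-out replicas are picked up automatically
     */
    public String choose(List<String> instances) {
        if (instances.isEmpty()) {
            throw new IllegalStateException("no available instance");
        }
        int index = Math.floorMod(counter.getAndIncrement(), instances.size());
        return instances.get(index);
    }
}
```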
Further, the S34 distributed traffic limiting step includes:
a circuit breaking and degradation step, used for performing circuit breaking and degradation on a service when the application service is abnormal and unavailable, monitoring the problem through the service registry, and restoring routing requests to the service once it recovers.
Those skilled in the art will understand that when a resource in the call chain becomes unstable, for example when calls time out or the proportion of exceptions rises, calls to that resource are restricted so that requests fail fast, avoiding cascading failures that would affect other resources.
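A much simplified circuit-breaking sketch in that spirit follows; the threshold, cool-down, and recovery policy are assumptions rather than details taken from the patent:

```java
/** Tiny circuit breaker: opens after consecutive failures and retries after a cool-down. */
public class CircuitBreaker {
    private enum State { CLOSED, OPEN }

    private final int failureThreshold;
    private final long coolDownMillis;
    private State state = State.CLOSED;
    private int consecutiveFailures;
    private long openedAt;

    public CircuitBreaker(int failureThreshold, long coolDownMillis) {
        this.failureThreshold = failureThreshold;
        this.coolDownMillis = coolDownMillis;
    }

    /** Fails fast while the breaker is open and the cool-down has not elapsed. */
    public synchronized boolean allowRequest() {
        if (state == State.OPEN) {
            if (System.currentTimeMillis() - openedAt < coolDownMillis) {
                return false; // degrade: the caller should return a fallback response
            }
            state = State.CLOSED; // cool-down elapsed: tentatively let traffic through again
            consecutiveFailures = 0;
        }
        return true;
    }

    public synchronized void recordSuccess() {
        consecutiveFailures = 0;
    }

    public synchronized void recordFailure() {
        if (++consecutiveFailures >= failureThreshold) {
            state = State.OPEN;
            openedAt = System.currentTimeMillis();
        }
    }
}
```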
Further, for route rewriting, the gateway determines which service is to be accessed by parsing the URL of the request, and the request is then routed to the target service via the routing table. Sometimes a service is temporarily unavailable for network reasons, in which case the request needs to be retried. Forwarding of both HTTP interfaces and microservice registration interfaces is supported.
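A minimal routing-table lookup by URL prefix might be sketched as follows; the prefix-matching rule and class names are illustrative assumptions, since the description only states that the target service is resolved from the request URL through a routing table:

```java
import java.util.LinkedHashMap;
import java.util.Map;

/** Resolves the upstream service for a request path via a prefix routing table. */
public class RouteTable {
    // Insertion order matters: put more specific prefixes first.
    private final Map<String, String> routes = new LinkedHashMap<>();

    public void addRoute(String pathPrefix, String serviceName) {
        routes.put(pathPrefix, serviceName);
    }

    /** Returns the target service for the given path, or null if no rule matches. */
    public String resolve(String path) {
        for (Map.Entry<String, String> rule : routes.entrySet()) {
            if (path.startsWith(rule.getKey())) {
                return rule.getValue();
            }
        }
        return null;
    }
}
```

For example, after addRoute("/order/", "order-service"), resolve("/order/123") returns "order-service"; the gateway then forwards the request to an instance of that service and may retry once if the instance is temporarily unreachable.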
Those skilled in the art will understand that the high-concurrency request forwarding platform provided by this embodiment includes: distributed traffic limiting, which limits the total traffic of the system and prevents it from exceeding the system's capacity; route rewriting, which supports forwarding of HTTP interfaces and microservice interfaces, receives HTTPS requests, and forwards them to background microservices according to the routing rules (by default, a request forwarded to a background microservice remains an HTTPS request); interface management, which configures interfaces and issues Tokens to clients to realize authentication; and log monitoring, which monitors interface exceptions in real time, periodically analyzes interface call patterns, and pushes monitoring reports to the relevant personnel by e-mail.
Example two
This embodiment provides a high-concurrency request distribution system, which includes:
a module used for the server to receive a high-concurrency request sent by the client;
a module used for the client to forward the request to an upstream service through the router;
a module used for the unified interface scheduling entrance to process and forward the high-concurrency request;
and a module used for the client to receive and process the returned result.
Those skilled in the art will understand that the high-concurrency request distribution system enables two mutually independent systems to communicate with each other through the router, forwards requests to the upstream service on behalf of the client, and processes the returned results. The system uses a distributed architecture, interface configuration is updated in real time, requests are forwarded with millisecond-level response, and problems are discovered and located in time through log monitoring so that the responsible persons can resolve them, improving working efficiency.
Further, the module for the unified interface scheduling entrance to process and forward the high-concurrency request includes:
a permission judgment sub-module, used for judging, when an application or client outside the system accesses the system, whether the requesting end has permission to enter the system; if so, it enters the system after authentication, otherwise it is denied entry;
a protocol conversion sub-module, used for converting between protocols through generic invocation if the transmission protocols are inconsistent;
a load balancing sub-module, used for invoking horizontally scaled services;
a distributed traffic limiting sub-module, used for limiting the total traffic of the system with a token bucket algorithm if the requested traffic exceeds the system's capacity;
and a log monitoring sub-module, used for monitoring interface exceptions in real time, periodically analyzing interface call patterns, and pushing monitoring reports to the corresponding users by e-mail.
Those skilled in the art will understand that generic invocation is mainly used when the client has no API interface classes or model classes: all POJOs in the parameters and return values are represented by Maps. It is usually used for framework integration, for example implementing a generic service test framework in which every service implementation can be called through GenericService. The high-concurrency request distribution system provided by this embodiment can realize verification, authentication, load balancing, service governance, protocol conversion, traffic limiting, circuit breaking and degradation, data caching, error alerting, and other functions. The system consists of main modules such as the core gateway, the interface management interface, and log monitoring. The core of the token bucket model is a fixed token-replenishment rate: a request must first obtain a token before it is processed, and traffic that cannot obtain a token is throttled. The token bucket algorithm limits the number of incoming requests, ensuring that the current request volume stays within the system's capacity and improving system stability. Logs are processed asynchronously and analyzed in real time with FileBeat + Kafka to realize exception alerting. The corresponding user here is not necessarily one of the technical personnel involved.
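As an illustrative sketch of the FileBeat + Kafka log analysis mentioned above (the topic name, the error-detection rule, and the alert action are assumptions, not details given in the patent), a consumer that watches gateway logs and raises an alert could look like this:

```java
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class LogMonitor {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "gateway-log-monitor");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            // "gateway-access-log" is a hypothetical topic filled by FileBeat.
            consumer.subscribe(Collections.singletonList("gateway-access-log"));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                long errors = 0;
                for (ConsumerRecord<String, String> record : records) {
                    if (record.value().contains("\"status\":5")) { // crude 5xx detection
                        errors++;
                    }
                }
                if (errors > 0) {
                    // A real system would e-mail a monitoring report to the responsible person here.
                    System.err.println("interface exceptions detected in the last batch: " + errors);
                }
            }
        }
    }
}
```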
Further, the high-concurrency request is an HTTP request. Conversion between protocols is realized through generic invocation, and the HTTP protocol is used uniformly inside the system. NIO asynchronous requests are processed with Netty, which improves system throughput.
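As a sketch of protocol conversion through generic invocation (this assumes a Dubbo-style GenericService, which the description mentions by name; the registry address, interface name, method, and argument below are hypothetical):

```java
import org.apache.dubbo.config.ApplicationConfig;
import org.apache.dubbo.config.ReferenceConfig;
import org.apache.dubbo.config.RegistryConfig;
import org.apache.dubbo.rpc.service.GenericService;

public class GenericInvokeExample {
    public static void main(String[] args) {
        // Reference the upstream RPC service generically: the gateway holds no
        // compiled interface or model classes, only the interface name as a string.
        ReferenceConfig<GenericService> reference = new ReferenceConfig<>();
        reference.setApplication(new ApplicationConfig("gateway"));
        reference.setRegistry(new RegistryConfig("zookeeper://127.0.0.1:2181"));
        reference.setInterface("com.example.UserService"); // hypothetical upstream interface
        reference.setGeneric("true");

        GenericService service = reference.get();
        // Parameters and return values are plain Strings/Maps, so an HTTP request body
        // can be turned into an RPC call without generated stubs.
        Object result = service.$invoke(
                "findUser",
                new String[] {"java.lang.String"},
                new Object[] {"42"});
        System.out.println(result); // typically a Map describing the returned POJO
    }
}
```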
Furthermore, the module for the unified interface scheduling entrance to process and forward the high-concurrency request adopts a chain-of-responsibility pattern, decoupling the request from the modules that process it and processing the request step by step.
Those skilled in the art will appreciate that a message enters at the first processing step and leaves after the last processing step, with each step processing the message as it passes, so that the whole process forms a chain.
Further, in the permission judgment sub-module, the user acquires a Token through the login service, the acquired Token is stored on the client, and the Token is placed in the request sent to the server on every request.
Those skilled in the art will understand that the client carries the Token issued by the system in every request, and unified authentication realizes secure invocation of the system.
Further, when a microservice is dynamically scaled out, its registration information is acquired through the service registry, thereby realizing load balancing. If the upstream service adopts a microservice architecture, it can also cooperate with the registry to realize dynamic load balancing. When multiple replicas of the same application are attached behind the gateway, each user's request is routed to a corresponding service instance by the gateway's load balancing algorithm.
Further, the distributed traffic limiting sub-module includes:
a circuit breaking and degradation unit, used for performing circuit breaking and degradation on a service when the application service is abnormal and unavailable, monitoring the problem through the service registry, and restoring routing requests to the service once it recovers.
Those skilled in the art will understand that when a resource in the call chain becomes unstable, for example when calls time out or the proportion of exceptions rises, calls to that resource are restricted so that requests fail fast, avoiding cascading failures that would affect other resources.
Further, for route rewriting, the gateway determines which service is to be accessed by parsing the URL of the request, and the request is then routed to the target service via the routing table. Sometimes a service is temporarily unavailable for network reasons, in which case the request needs to be retried. Forwarding of both HTTP interfaces and microservice registration interfaces is supported.
Those skilled in the art will understand that the high-concurrency request forwarding platform provided by this embodiment includes: distributed traffic limiting, which limits the total traffic of the system and prevents it from exceeding the system's capacity; route rewriting, which supports forwarding of HTTP interfaces and microservice interfaces, receives HTTPS requests, and forwards them to background microservices according to the routing rules (by default, a request forwarded to a background microservice remains an HTTPS request); interface management, which configures interfaces and issues Tokens to clients to realize authentication; and log monitoring, which monitors interface exceptions in real time, periodically analyzes interface call patterns, and pushes monitoring reports to the relevant personnel by e-mail.
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solutions of the present invention, not to limit them. Although the present invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that the technical solutions described in the foregoing embodiments may still be modified, or some of their technical features may be replaced by equivalents, and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.
Claims (14)
1. A high-concurrency request distribution method, characterized by comprising the following steps:
S1, a step in which the server receives a high-concurrency request sent by the client;
S2, a step in which the client forwards the request to an upstream service through a router;
S3, a step in which a unified interface scheduling entrance processes and forwards the high-concurrency request;
S4, a step in which the client receives and processes the returned result.
2. The high-concurrency request distribution method according to claim 1, wherein the S3 step in which the unified interface scheduling entrance processes and forwards the high-concurrency request comprises:
S31, a permission judgment step, used for judging, when an application or client outside the system accesses the system, whether the requesting end has permission to enter the system; if so, it enters the system after authentication, otherwise it is denied entry;
S32, a protocol conversion step, used for converting between protocols through generic invocation if the transmission protocols are inconsistent;
S33, a load balancing step, used for invoking horizontally scaled services;
S34, a distributed traffic limiting step, used for limiting the total traffic of the system with a token bucket algorithm if the requested traffic exceeds the system's capacity;
S35, a log monitoring step, used for monitoring interface exceptions in real time, periodically analyzing interface call patterns, and pushing monitoring reports to the corresponding users by e-mail.
3. The high concurrency request distribution method according to claim 2, wherein the high concurrency request is an HTTP request.
4. The high-concurrency request distribution method according to claim 3, wherein the S3 step in which the unified interface scheduling entrance processes and forwards the high-concurrency request adopts a chain-of-responsibility pattern, decoupling the request from its processing steps and processing the request step by step.
5. The high-concurrency request distribution method according to claim 4, wherein in the S31 permission judgment step, the user acquires a Token through a login service, the acquired Token is stored on the client, and the Token is placed in the request sent to the server on every request.
6. The high-concurrency request distribution method according to claim 5, wherein, when a microservice is dynamically scaled out, registration information of the microservice is acquired through the service registry.
7. The high-concurrency request distribution method according to claim 6, wherein the S34 distributed traffic limiting step comprises:
a circuit breaking and degradation step, used for performing circuit breaking and degradation on a service when the application service is abnormal and unavailable, monitoring the problem through the service registry, and restoring routing requests to the service once it recovers.
8. A high-concurrency request distribution system, characterized by comprising:
a module used for the server to receive a high-concurrency request sent by the client;
a module used for the client to forward the request to an upstream service through a router;
a module used for a unified interface scheduling entrance to process and forward the high-concurrency request;
and a module used for the client to receive and process the returned result.
9. The high-concurrency request distribution system according to claim 8, wherein the module for the unified interface scheduling entrance to process and forward the high-concurrency request comprises:
a permission judgment sub-module, used for judging, when an application or client outside the system accesses the system, whether the requesting end has permission to enter the system; if so, it enters the system after authentication, otherwise it is denied entry;
a protocol conversion sub-module, used for converting between protocols through generic invocation if the transmission protocols are inconsistent;
a load balancing sub-module, used for invoking horizontally scaled services;
a distributed traffic limiting sub-module, used for limiting the total traffic of the system with a token bucket algorithm if the requested traffic exceeds the system's capacity;
and a log monitoring sub-module, used for monitoring interface exceptions in real time, periodically analyzing interface call patterns, and pushing monitoring reports to the corresponding users by e-mail.
10. The high concurrency request distribution system according to claim 9, wherein the high concurrency request is an HTTP request.
11. The high-concurrency request distribution system according to claim 10, wherein the module for the unified interface scheduling entrance to process and forward the high-concurrency request adopts a chain-of-responsibility pattern, decoupling the request from the modules that process it and processing the request step by step.
12. The high-concurrency request distribution system according to claim 11, wherein, in the permission judgment sub-module, the user acquires a Token through the login service, the acquired Token is stored on the client, and the Token is placed in the request sent to the server on every request.
13. The high-concurrency request distribution system according to claim 12, wherein, when a microservice is dynamically scaled out, registration information of the microservice is acquired through the service registry.
14. The high-concurrency request distribution system according to claim 13, wherein the distributed traffic limiting sub-module comprises: a circuit breaking and degradation unit, used for performing circuit breaking and degradation on a service when the application service is abnormal and unavailable, monitoring the problem through the service registry, and restoring routing requests to the service once it recovers.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011007291.9A CN112217878A (en) | 2020-09-23 | 2020-09-23 | High-concurrency request distribution method and system |
Publications (1)
Publication Number | Publication Date |
---|---|
CN112217878A (en) | 2021-01-12 |
Family
ID=74050711
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011007291.9A (Pending) | High-concurrency request distribution method and system | 2020-09-23 | 2020-09-23 |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112217878A (en) |
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109672612A (en) * | 2018-12-13 | 2019-04-23 | 中国电子科技集团公司电子科学研究院 | API gateway system |
CN111488135A (en) * | 2019-01-28 | 2020-08-04 | 珠海格力电器股份有限公司 | Current limiting method and device for high-concurrency system, storage medium and equipment |
CN110908658A (en) * | 2019-11-15 | 2020-03-24 | 国网电子商务有限公司 | A "micro-service + micro-application" system, data processing method and device |
CN111083199A (en) * | 2019-11-23 | 2020-04-28 | 上海畅星软件有限公司 | High-concurrency, high-availability and service-extensible platform-based processing architecture |
CN111130892A (en) * | 2019-12-27 | 2020-05-08 | 上海浦东发展银行股份有限公司 | Enterprise-level microservice management system and method |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113411208A (en) * | 2021-05-28 | 2021-09-17 | 青岛海尔科技有限公司 | System, device for distributed traffic management |
CN113824554A (en) * | 2021-08-30 | 2021-12-21 | 山东健康医疗大数据有限公司 | Dynamic authentication method and device for data transmission between middleware and computer medium |
CN113824554B (en) * | 2021-08-30 | 2024-02-13 | 山东浪潮智慧医疗科技有限公司 | Dynamic authentication method, device and computer medium for data transmission between middleware |
Legal Events
Date | Code | Title | Description
---|---|---|---
 | PB01 | Publication |
 | SE01 | Entry into force of request for substantive examination |
 | RJ01 | Rejection of invention patent application after publication | Application publication date: 20210112