
CN106657232A - Distributed server configuration and service method thereof - Google Patents


Info

Publication number
CN106657232A
Authority
CN
China
Prior art keywords
service
exchange system
data exchange
service request
application
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201610865425.8A
Other languages
Chinese (zh)
Inventor
程林
侯冬刚
杨培强
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shandong Inspur Business System Co Ltd
Original Assignee
Shandong Inspur Business System Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shandong Inspur Business System Co Ltd filed Critical Shandong Inspur Business System Co Ltd
Priority to CN201610865425.8A priority Critical patent/CN106657232A/en
Publication of CN106657232A publication Critical patent/CN106657232A/en
Pending legal-status Critical Current


Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00Network arrangements or protocols for supporting network services or applications
    • H04L67/01Protocols
    • H04L67/10Protocols in which an application is distributed across nodes in the network
    • H04L67/1001Protocols in which an application is distributed across nodes in the network for accessing one among a plurality of replicated servers
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00Network arrangements or protocols for supporting network services or applications
    • H04L67/01Protocols
    • H04L67/10Protocols in which an application is distributed across nodes in the network
    • H04L67/104Peer-to-peer [P2P] networks
    • H04L67/1042Peer-to-peer [P2P] networks using topology management mechanisms
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00Network arrangements or protocols for supporting network services or applications
    • H04L67/01Protocols
    • H04L67/10Protocols in which an application is distributed across nodes in the network
    • H04L67/104Peer-to-peer [P2P] networks
    • H04L67/1061Peer-to-peer [P2P] networks using node-based peer discovery mechanisms
    • H04L67/1065Discovery involving distributed pre-established resource-based relationships among peers, e.g. based on distributed hash tables [DHT] 
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00Network arrangements or protocols for supporting network services or applications
    • H04L67/50Network services
    • H04L67/60Scheduling or organising the servicing of application requests, e.g. requests for application data transmissions using the analysis and optimisation of the required network resources

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Business, Economics & Management (AREA)
  • General Business, Economics & Management (AREA)
  • Telephonic Communication Services (AREA)

Abstract

The invention provides a distributed server architecture and a service method thereof. The distributed server architecture comprises a first front-end application, a data exchange system and a central application. The first front-end application receives a first service request initiated by a user and sends the first service request to the data exchange system. According to the service type corresponding to the first service request, the data exchange system calls a corresponding service interface of the central application, receives the first service processing result fed back by the corresponding service interface, and sends the first service processing result to the first front-end application. The central application provides service interfaces corresponding to different service types, performs service processing on the first service request by using the called service interface, and feeds the first service processing result back to the data exchange system. With the distributed server architecture and the service method thereof, a relatively large volume of services can be borne, and the probability of server downtime caused by a large service volume is reduced.

Description

Distributed server architecture and service method thereof
Technical Field
The present invention relates to the field of server technologies, and in particular, to a distributed server architecture and a service method thereof.
Background
With the continuous development of network technology, more and more services can be transacted over the Internet, which improves transaction efficiency. In order to handle business online, an enterprise needs to provide an online business-handling function through a server architecture.
The existing server architecture adopts a centralized network infrastructure and includes a processing module that simultaneously receives a service request initiated by a user, processes the service request, and feeds the processing result back to the user.
However, as the service volume increases, the existing server architecture cannot meet the demand of processing a large number of services, and when the service volume reaches the limit that the server architecture can bear, the server may go down.
Disclosure of Invention
The embodiment of the invention provides a distributed server architecture and a service method thereof, which aim to reduce the probability of server downtime caused by a large service volume.
A distributed server architecture, comprising: a first front-end application, a data exchange system and a central application; wherein,
the first front-end application is used for displaying service data to a user, receiving a first service request initiated according to the displayed service data, sending the first service request to the data exchange system, and feeding back a service processing result sent by the data exchange system to the user;
the data exchange system is configured to receive the first service request, call a service interface corresponding to the central application according to a service type corresponding to the first service request, receive a first service processing result fed back by the corresponding service interface, and send the first service processing result to the first front-end application;
the central application is configured to provide service interfaces corresponding to different service types, perform service processing on the first service request by using the called service interface, and feed back a result of the first service processing to the data exchange system.
Preferably, the data exchange system comprises: a unified service access interface, a message queue, a core processing module and a protocol adapter;
the unified service access interface is configured to receive the first service request sent by the first front-end application, place the first service request into the message queue, and sequentially feed back the processed first service processing result stored in the message queue to the corresponding first front-end application;
the core processing module is configured to take out the first service request from the message queue in sequence, and send the first service request to a corresponding protocol adapter according to a service code carried in the first service request;
and the protocol adapter is used for calling a corresponding service interface provided by the central application according to the first service request.
Preferably, the distributed server architecture further comprises: a high speed service framework;
the high speed service framework comprises: a service registry and a service engine;
the service registration center is used for storing the service addresses of the service interfaces corresponding to the central application and sending the service addresses of the service interfaces corresponding to the central application to the data exchange system;
and the service engine is used for receiving a calling instruction sent by the data exchange system, wherein the calling instruction carries a service address of a corresponding service interface, and the service engine calls the corresponding service interface according to the service address of the corresponding service interface carried in the calling instruction.
Preferably, further comprising: a second front-end application;
the second front-end application is arranged on the intranet side and used for receiving a second service request initiated by a user and sending the second service request to the central application;
the first front-end application is arranged on the external network side.
Preferably, further comprising: a front-end agent;
the front-end agent is used for storing a set load balancing strategy, receiving the first service request initiated by a user, and sending the first service request to a corresponding first front-end application according to the load balancing strategy.
Preferably, further comprising: a third-party access management module;
the third-party access management module is used for receiving a third service request sent by a third-party access channel, sending the third service request to the data exchange system, receiving a third service processing result fed back by the data exchange system, and feeding back the third service processing result through the corresponding third-party access channel;
the data exchange system is further configured to call a service interface corresponding to the central application according to the service type corresponding to the third service request, receive a third service processing result fed back by the corresponding service interface, and send the third service processing result to the third-party access management module.
A service method based on any one of the distributed server architectures, comprising:
displaying service data to a user by using the first front-end application, and receiving a first service request initiated by the user according to the displayed service data;
sending the first service request to the data exchange system by using the first front-end application, and calling a service interface corresponding to the central application by using the data exchange system according to the service type corresponding to the first service request;
processing the first service request by using the central application through the called service interface, and feeding back a processed first service processing result to the data exchange system;
and feeding back the first service processing result to a user through the first front-end application by using the data exchange system.
Preferably, the invoking, by the data exchange system, a service interface corresponding to the central application according to the service type corresponding to the first service request includes:
receiving the first service request sent by the first front-end application by using the unified service access interface in the data exchange system, and putting the first service request into a message queue of the data exchange system;
the core processing module in the data exchange system is utilized to take out the first service request from the message queue in sequence, and the first service request is sent to a corresponding protocol adapter according to the service code carried in the first service request;
and calling a corresponding service interface provided by the central application according to the first service request by using the protocol adapter in the data exchange system.
Preferably,
further comprising: storing the service address of each service interface corresponding to the central application by using a service registration center in a high-speed service framework, and sending the service address of each service interface corresponding to the central application to the data exchange system;
the calling of the service interface corresponding to the central application comprises: sending a calling instruction to a service engine in the high-speed service framework by using the data exchange system according to the service address of each service interface corresponding to the central application; and calling the corresponding service interface by using the service engine according to the service address of the corresponding service interface carried in the calling instruction.
Preferably,
further comprising: receiving a second service request initiated by a user by utilizing the second front-end application arranged on the intranet side, and sending the second service request to the central application;
and/or,
before the first front-end application is utilized to receive a first service request initiated by a user according to the displayed service data, the method further comprises the following steps: receiving the first service request initiated by a user by using the front-end agent, and sending the first service request to a corresponding first front-end application according to a stored load balancing strategy;
and/or,
further comprising: receiving a third service request sent by a third-party access channel by using the third-party access management module, sending the third service request to the data exchange system, calling a service interface corresponding to the central application by using the data exchange system according to a service type corresponding to the third service request, and sending a third service processing result fed back by the corresponding service interface to the third-party access management module; and feeding back the third service processing result through a corresponding third party access channel by utilizing the third party access management module.
The embodiment of the invention provides a distributed server architecture and a service method thereof, wherein the server architecture is divided into a first front-end application, a data exchange system and a central application; the first front-end application receives a first service request initiated by a user, the data exchange system forwards the first service request, and the central application processes the first service request, so that front-end and back-end separation is realized, a larger volume of services can be borne, and the probability of server downtime caused by a large service volume is reduced.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly introduced below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to these drawings without creative efforts.
FIG. 1 is a schematic diagram of a distributed server architecture according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of another distributed server architecture provided by an embodiment of the present invention;
FIG. 3 is a schematic diagram of yet another distributed server architecture provided by an embodiment of the present invention;
FIG. 4 is a schematic diagram of yet another distributed server architecture provided by an embodiment of the present invention;
FIG. 5 is a schematic diagram of a distributed server architecture according to another embodiment of the present invention;
FIG. 6 is a schematic diagram of another distributed server architecture provided by another embodiment of the present invention;
fig. 7 is a flow chart of a service method according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer and more complete, the technical solutions in the embodiments of the present invention will be described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention, and based on the embodiments of the present invention, all other embodiments obtained by a person of ordinary skill in the art without creative efforts belong to the scope of the present invention.
As shown in fig. 1, an embodiment of the present invention provides a distributed server architecture, which may include the following: a first front-end application 101, a data exchange system 102 and a central application 103; wherein,
the first front-end application 101 is configured to display service data to a user, receive a first service request initiated according to the displayed service data, send the first service request to the data exchange system, and feed back a service processing result sent by the data exchange system 102 to the user;
the data exchange system 102 is configured to receive the first service request, invoke a service interface corresponding to the central application according to a service type corresponding to the first service request, receive a first service processing result fed back by the corresponding service interface, and send the first service processing result to the first front-end application 101;
the central application 103 is configured to provide service interfaces corresponding to different service types, perform service processing on the first service request by using the called service interface, and feed back the first service processing result to the data exchange system.
According to the embodiment, the server architecture is split into the first front-end application, the data exchange system and the central application, the first front-end application receives the first service request initiated by the user, the data exchange system forwards the first service request, and the central application processes the first service request, so that the front-end and back-end separation is realized, a larger number of services can be borne, and the probability of server downtime caused by a large number of services is reduced.
The first front-end application can be arranged at an external network side, the central application is arranged at an internal network side, and the data exchange system is used for realizing interconnection and intercommunication between the external network side and the internal network side.
The first front-end application is mainly used for service display, service reception and verification for users on the external network side. Taking the application of the distributed server architecture in a tax system as an example, the first front-end application mainly provides page operation functions for the taxpayer, and may include at least one of the following operation functions: information portal, single sign-on, user management, declaration service, application service and service receiving service.
In an embodiment of the present invention, in order to implement service differentiation between the external network side user and the internal network side user, please refer to fig. 2, the distributed server architecture may further include: a second front-end application 201;
the second front-end application is arranged on the intranet side and used for receiving a second service request initiated by a user and sending the second service request to the central application.
The second front-end application is mainly used for service display, service reception and verification of users on the internal network side. Taking the application of the distributed server architecture in the tax system as an example, the second front-end application may provide a page operation function for the staff in the tax bureau, and may include at least one of the following operation functions: working portal, single sign-on, user management, business processing, query statistics and system configuration functions.
For the first front-end application and the second front-end application, both may include: a display layer, a control layer, and a service delegation layer. The functions performed by these three layers will be described separately below.
1. Display layer.
The display layer may include: JSP, HTML, and Web components. Its main functions may include: processing display logic, handling user interaction, and rendering the user interface.
2. Control layer.
The main functions of the control layer may include: sending requests, handling exceptions, handling prompt messages, and page redirection to the central application.
3. Service delegation layer.
The main functions of the service delegation layer may include: verification of business logic, orchestration of service calls, and message conversion (converting an object to JSON and converting JSON back to an object).
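As an illustrative sketch only (the class, field and method names are assumptions, not taken from the patent), the object-to-JSON and JSON-to-object conversion performed by the service delegation layer might look like the following, using the Gson library:

import com.google.gson.Gson;

// Hypothetical request object; field names are illustrative assumptions.
class ServiceRequest {
    String serviceCode;   // identifies which business service is requested
    String payload;       // business data carried by the request

    ServiceRequest() { }

    ServiceRequest(String serviceCode, String payload) {
        this.serviceCode = serviceCode;
        this.payload = payload;
    }
}

public class ServiceDelegate {
    private static final Gson GSON = new Gson();

    // Object -> JSON: serialize the request before sending it to the data exchange system.
    static String toJson(ServiceRequest request) {
        return GSON.toJson(request);
    }

    // JSON -> Object: deserialize the message coming back from the data exchange system.
    static ServiceRequest fromJson(String json) {
        return GSON.fromJson(json, ServiceRequest.class);
    }

    public static void main(String[] args) {
        String json = toJson(new ServiceRequest("SB001", "{\"taxpayerId\":\"...\"}"));
        System.out.println(json);
        System.out.println(fromJson(json).serviceCode);
    }
}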
In an embodiment of the present invention, in order to further reduce the probability of server downtime caused by a large service volume and to ensure that the distributed server architecture operates normally under heavy load, referring to fig. 3, the data exchange system 102 may include: a unified service access interface 1021, a message queue 1022, a core processing module 1023, and a protocol adapter 1024;
the unified service access interface 1021 is configured to receive the first service request sent by the first front-end application, place the first service request into the message queue 1022, and sequentially feed back the processed first service processing result stored in the message queue 1022 to the corresponding first front-end application;
the core processing module 1023 is configured to take out the first service requests from the message queue 1022 in sequence, and send the first service requests to corresponding protocol adapters 1024 according to service codes carried in the first service requests;
the protocol adapter 1024 is configured to call, according to the first service request, a corresponding service interface provided by the central application.
The unified service access interface may provide access protocols such as HSF (a distributed service framework), EJB (the Java EE server-side component model), WS (Web Services), JMS (Java Message Service), and the like.
For the central application, business processing services are mainly provided for the first front-end application and the second front-end application. The central application can provide services based on HSF and EJB protocols externally, and data transmission is carried out between the services in a message mode. Also taking the application of the distributed server architecture in the tax system as an example, the central application may include a user center, a security center, a message center, a management center, a declaration center, an application center, a service receiving center, and a flow center.
Wherein the central application may include: the system comprises a central service interface layer, a business logic layer and a data access layer. The functions performed by these three layers will be described separately below.
1. Central service interface layer.
The central service interface layer is used for defining a service interface which is externally issued by the central application.
2. Business logic layer.
The business logic layer may include IService (the externally published service interface, inherited from the central service interface layer) and its implementation class; it mainly implements the business logic and facilitates reuse. Its main functions may include: interaction with multiple DAOs, common interfaces for the central application's internal modules, transaction management, and implementation of business logic.
3. Data access layer.
The data access layer may include IDao and its implementation class, which are responsible for database access.
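A minimal sketch of how these three layers might fit together (only the names IService and IDao come from the description above; the method signatures are assumptions for illustration):

// Central service interface layer: defines the service interface published externally.
interface IService {
    String handle(String requestJson);
}

// Data access layer: IDao and its implementation are responsible for database access.
interface IDao {
    void save(String record);
    String findById(String id);
}

// Business logic layer: implements IService, interacts with one or more DAOs,
// and is where transactions and business rules would be handled.
class ServiceImpl implements IService {
    private final IDao dao;

    ServiceImpl(IDao dao) {
        this.dao = dao;
    }

    @Override
    public String handle(String requestJson) {
        // Business logic would validate the request, call the DAO(s), and build a result.
        dao.save(requestJson);
        return "{\"status\":\"ok\"}";
    }
}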
In one embodiment of the invention, in order to integrate different internal application systems directly and effectively, the data transmission format can be standardized; considering its light weight and convenience, JSON is adopted as the data exchange language.
JSON (JavaScript Object Notation) is a lightweight data exchange format based on a subset of ECMAScript.
JSON uses a text format that is completely language independent yet follows conventions familiar from the C family of languages (C, C++, C#, Java, JavaScript, Perl, Python, and so on). These properties make JSON an ideal data exchange language: it is easy for people to read and write, easy for machines to parse and generate, and it helps improve network transmission efficiency.
The request message may follow the specification shown in Table 1 (exemplary settings for the request packet):
Table 1:
The response message may follow the specification shown in Table 2 (exemplary settings for the response packet):
Table 2:
it should be noted that the specification of the above message is only an optional manner, and other message specifications may also be used in this embodiment.
In an embodiment of the present invention, in order to further ensure normal service of the distributed server architecture when the number of services is large, referring to fig. 4, the distributed server architecture may further include: a front-end agent 401;
the front-end agent 401 is configured to store a set load balancing policy, receive the first service request initiated by the user, and send the first service request to a corresponding first front-end application according to the load balancing policy.
An nginx server may be adopted as the front-end agent to provide session persistence, load balancing and reverse proxying. The specific configuration may include at least one of the following:
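A minimal sketch of such a configuration, assuming two first front-end application instances behind the proxy and the ip_hash strategy for session persistence (all addresses, ports and names below are assumptions), might look like the following:

upstream frontend_app {
    ip_hash;                      # session persistence: requests from one client go to the same instance
    server 192.168.1.11:8080;     # first front-end application instance (assumed address)
    server 192.168.1.12:8080;     # second instance (assumed address)
}

server {
    listen 80;
    location / {
        proxy_pass http://frontend_app;   # reverse proxy to the load-balanced front-end applications
    }
}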
in an embodiment of the present invention, the distributed server architecture may further provide a third party access function, and therefore, referring to fig. 5, the distributed server architecture may further include: a third party accesses the management module 501;
the third-party access management module 501 is configured to receive a third service request sent by a third-party access channel, send the third service request to the data exchange system, receive a third service processing result fed back by the data exchange system, and feed back the third service processing result through a corresponding third-party access channel;
the data exchange system 102 is further configured to invoke a service interface corresponding to the central application according to the service type corresponding to the third service request, receive a third service processing result fed back by the corresponding service interface, and send the third service processing result to the third party access management module.
The third-party access management module can publish the HSF and EJB services as RESTful services for third parties to call.
A third-party application can call the public services published by the central application through the third-party access management module. The third-party calling protocol can adopt the RESTful style, with access authentication carried out through an AccessKey and a SecretKey. The third-party access management module provides a service protocol conversion function that converts HSF and EJB services into RESTful services.
The interface-calling code of the third-party access management module may take one of the following forms:
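Purely as an illustration of the RESTful access and AccessKey/SecretKey authentication described above (the URL, header names and signing approach are assumptions, not the actual interface of the third-party access management module), such a call might look like the following in Java:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class ThirdPartyClient {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();

        // Hypothetical RESTful endpoint exposed by the third-party access management module.
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://example.com/gateway/rest/declare"))
                .header("Content-Type", "application/json")
                .header("X-Access-Key", "demo-access-key")   // assumed header carrying the AccessKey
                .header("X-Secret-Key", "demo-secret-key")   // assumed header; a real system would send a signature instead
                .POST(HttpRequest.BodyPublishers.ofString("{\"serviceCode\":\"SB001\",\"data\":{}}"))
                .build();

        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode());
        System.out.println(response.body());   // third service processing result fed back via the data exchange system
    }
}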
in an embodiment of the present invention, in order to increase the operation speed of the distributed server architecture, please refer to fig. 6, the distributed server architecture may further include: a high-speed service framework 601;
the high-speed service framework 601 includes: a service registry 6011 and a service engine 6012;
the service registration center 6011 is configured to store service addresses of the service interfaces corresponding to the central application, and send the service addresses of the service interfaces corresponding to the central application to the data exchange system;
the service engine 6012 is configured to receive a call instruction sent by the data exchange system, where the call instruction carries a service address of a corresponding service interface, and call the corresponding service interface according to the service address of the corresponding service interface carried in the call instruction.
The high-speed service framework plays the role of service governance: all services are published, called, registered, subscribed to and routed through the high-speed service framework. The service provider publishes its service address to the high-speed service framework, and the service caller obtains the actual service address from the high-speed service framework and then calls the service provided by the provider directly.
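A minimal sketch of the publish/lookup flow provided by the registry (class names, method names and the address format are assumptions for illustration):

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical in-memory registry illustrating the publish / lookup flow described above.
class ServiceRegistry {
    private final Map<String, String> addresses = new ConcurrentHashMap<>(); // service name -> service address

    // Service provider publishes its latest service address.
    void publish(String serviceName, String address) {
        addresses.put(serviceName, address);
    }

    // Service caller obtains the actual service address, then calls the provider directly.
    String lookup(String serviceName) {
        return addresses.get(serviceName);
    }
}

public class RegistryDemo {
    public static void main(String[] args) {
        ServiceRegistry registry = new ServiceRegistry();
        registry.publish("LsEmployeeService", "netty://localhost:2012"); // address format is an assumption
        System.out.println(registry.lookup("LsEmployeeService"));
    }
}

A production registry would additionally push address updates to subscribers rather than relying on lookup alone, as described for the service registration center below.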
Under a high-speed service framework, a system is generally divided into three parts: Web applications, business services and basic services. The Web application interacts with the end user and does not contain core business logic; the core business logic is implemented by the business services. The basic services mainly provide common base functionality. Both business services and basic services can be implemented as J2EE applications, as .NET applications, or as ordinary standalone applications.
The service engine can also provide a service publishing function. Service callers and service providers use long-lived connections to improve performance.
The service registration center can also provide various unified service management functions, and most importantly, the service address subscribing/publishing function is provided, the service provider publishes the latest address to the service registration center, and the service caller acquires the latest service address in real time through subscription. The service registry may also provide service routing, service authorization, service lifecycle management.
The high-speed service framework can also comprise a service monitoring center which is used for collecting the data of the service engine operation for statistical analysis, monitoring the service operation condition and providing the operation data of the service to the service registration center to assist in service management.
The service caller and the service provider are completely decoupled and interact through a common service protocol; the two parties can be on different technology platforms. The service caller can be a J2EE/PHP/.NET program, the service provider can be a J2EE/.NET program, and the service provider does not need to be a Web application.
The configuration for service publishing in the high-speed service framework may include one of the following:
<bean id="employee.netty" class="com.inspur.hsf.config.spring.proxy.SpringProviderBean">
    <property name="interfaceName"><value>org.loushang.demo.html.service.ILsEmployeeService</value></property>
    <property name="target"><ref bean="lsEmployeeService"/></property>
    <property name="serviceName"><value>LsEmployeeService</value></property>
    <property name="port"><value>2012</value></property>
    <property name="protocol"><value>netty</value></property>
    <property name="host"><value>localhost</value></property>
    <property name="toRegistry"><value>true</value></property>
    <property name="statistics"><value>false</value></property>
    <property name="serviceAlias"><value>employee management</value></property>
</bean>
The attributes interfaceName and target are required, and the other attributes are optional. The interfaceName specifies the interface through which the service is published, and the ref value of target is the id of the Spring bean that actually implements the interface.
When the serviceName attribute is not configured, the interface is used as the service name by default. If the service name is configured, uniqueness must be guaranteed.
The configuration for the service reference may include one of the following:
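Assuming the service-reference configuration mirrors the publisher bean shown above (the proxy class name SpringConsumerBean and the attribute set below are assumptions, not taken from the source), it might take a form such as:

<bean id="employeeServiceRef" class="com.inspur.hsf.config.spring.proxy.SpringConsumerBean">
    <property name="interfaceName"><value>org.loushang.demo.html.service.ILsEmployeeService</value></property>
    <property name="serviceName"><value>LsEmployeeService</value></property>
    <property name="version"><value>1.0</value></property>
</bean>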
the attribute interfaceName is required to be filled, and other attributes are optional.
The interfaceName specifies the interface of the service.
The serviceName and version of the reference service must be the same as the serviceName configured by the service publisher.
Referring to fig. 7, an embodiment of the present invention further provides a service method based on the distributed server architecture according to any of the foregoing embodiments, where the method includes:
step 701: displaying service data to a user by using the first front-end application, and receiving a first service request initiated by the user according to the displayed service data;
step 702: sending the first service request to the data exchange system by using the first front-end application, and calling a service interface corresponding to the central application by using the data exchange system according to the service type corresponding to the first service request;
step 703: processing the first service request by using the central application through the called service interface, and feeding back a processed first service processing result to the data exchange system;
step 704: and feeding back the first service processing result to a user through the first front-end application by using the data exchange system.
In an embodiment of the present invention, in order to further reduce the probability of server downtime caused by a large number of services and to ensure that the distributed server architecture operates normally when the service volume is large, the invoking, by the data exchange system, of a service interface corresponding to the central application according to the service type corresponding to the first service request includes:
receiving the first service request sent by the first front-end application by using the unified service access interface in the data exchange system, and putting the first service request into a message queue of the data exchange system;
the core processing module in the data exchange system is utilized to take out the first service request from the message queue in sequence, and the first service request is sent to a corresponding protocol adapter according to the service code carried in the first service request;
and calling a corresponding service interface provided by the central application according to the first service request by using the protocol adapter in the data exchange system.
In an embodiment of the present invention, to increase the operation speed of the distributed server architecture, the method may further include: storing the service address of each service interface corresponding to the central application by using a service registration center in a high-speed service framework, and sending the service address of each service interface corresponding to the central application to the data exchange system;
the calling of the service interface corresponding to the central application comprises: sending a calling instruction to a service engine in the high-speed service framework by using the data exchange system according to the service address of each service interface corresponding to the central application; and calling the corresponding service interface by using the service engine according to the service address of the corresponding service interface carried in the calling instruction.
In an embodiment of the present invention, in order to enable the distributed server architecture to provide services for users on the intranet side, the method may further include: receiving a second service request initiated by a user by utilizing the second front-end application arranged on the intranet side, and sending the second service request to the central application;
in an embodiment of the present invention, to implement load balancing, before receiving, by using the first front-end application, a first service request initiated by a user according to the presented service data, the method further includes: receiving the first service request initiated by a user by using the front-end agent, and sending the first service request to a corresponding first front-end application according to a stored load balancing strategy;
in an embodiment of the present invention, to implement that the distributed server architecture can provide a business service for a third party, the method may further include: receiving a third service request sent by a third-party access channel by using the third-party access management module, sending the third service request to the data exchange system, calling a service interface corresponding to the central application by using the data exchange system according to a service type corresponding to the third service request, and sending a third service processing result fed back by the corresponding service interface to the third-party access management module; and feeding back the third service processing result through a corresponding third party access channel by utilizing the third party access management module.
In summary, the embodiments of the present invention have at least the following advantages:
1. In the embodiment of the invention, the server architecture is divided into the first front-end application, the data exchange system and the central application; the first front-end application receives the first service request initiated by the user, the data exchange system forwards the first service request, and the central application processes the first service request, so that front-end and back-end separation is realized, a larger volume of services can be borne, and the probability of server downtime caused by a large service volume is reduced.
2. In the embodiment of the invention, the second front-end application is arranged in the distributed server architecture and is arranged on the intranet side, so that the distributed server architecture can provide business services for users on the intranet side, and the application function of the distributed server architecture is improved.
3. In the embodiment of the invention, the received service requests are queued by using the message queue of the data exchange system, so that normal operation of the distributed server architecture under a large service volume can be ensured, and the probability of a server crash is reduced.
4. In the embodiment of the invention, the third party access management module is realized in the distributed server architecture, so that the corresponding service function can be provided for the third party.
5. In the embodiment of the invention, the high-speed service framework is used to provide publishing, calling, registering, subscribing and routing of service addresses, so that the operation efficiency of the distributed server architecture can be improved and normal service processing of the distributed server architecture is ensured.
Because the information interaction, execution process, and other contents between the units in the device are based on the same concept as the method embodiment of the present invention, specific contents may refer to the description in the method embodiment of the present invention, and are not described herein again.
It is noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a" does not exclude the presence of other similar elements in a process, method, article, or apparatus that comprises the element.
Those of ordinary skill in the art will understand that: all or part of the steps for realizing the method embodiments can be completed by hardware related to program instructions, the program can be stored in a computer readable storage medium, and the program executes the steps comprising the method embodiments when executed; and the aforementioned storage medium includes: various media that can store program codes, such as ROM, RAM, magnetic or optical disks.
Finally, it is to be noted that: the above description is only a preferred embodiment of the present invention, and is only used to illustrate the technical solutions of the present invention, and not to limit the protection scope of the present invention. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention shall fall within the protection scope of the present invention.

Claims (10)

1. A distributed server architecture, comprising: a first front-end application, a data exchange system and a central application; wherein,
the first front-end application is used for displaying service data to a user, receiving a first service request initiated according to the displayed service data, sending the first service request to the data exchange system, and feeding back a service processing result sent by the data exchange system to the user;
the data exchange system is configured to receive the first service request, call a service interface corresponding to the central application according to a service type corresponding to the first service request, receive a first service processing result fed back by the corresponding service interface, and send the first service processing result to the first front-end application;
the central application is configured to provide service interfaces corresponding to different service types, perform service processing on the first service request by using the called service interface, and feed back a result of the first service processing to the data exchange system.
2. The distributed server architecture of claim 1, wherein the data exchange system comprises: a unified service access interface, a message queue, a core processing module and a protocol adapter;
the unified service access interface is configured to receive the first service request sent by the first front-end application, place the first service request into the message queue, and sequentially feed back the processed first service processing result stored in the message queue to the corresponding first front-end application;
the core processing module is configured to take out the first service request from the message queue in sequence, and send the first service request to a corresponding protocol adapter according to a service code carried in the first service request;
and the protocol adapter is used for calling a corresponding service interface provided by the central application according to the first service request.
3. The distributed server architecture of claim 1 or 2, further comprising: a high speed service framework;
the high speed service framework comprises: a service registry and a service engine;
the service registration center is used for storing the service addresses of the service interfaces corresponding to the central application and sending the service addresses of the service interfaces corresponding to the central application to the data exchange system;
and the service engine is used for receiving a calling instruction sent by the data exchange system, wherein the calling instruction carries a service address of a corresponding service interface, and the service engine calls the corresponding service interface according to the service address of the corresponding service interface carried in the calling instruction.
4. The distributed server architecture of claim 1, further comprising: a second front-end application;
the second front-end application is arranged on the intranet side and used for receiving a second service request initiated by a user and sending the second service request to the central application;
the first front-end application is arranged on the external network side.
5. The distributed server architecture of claim 1, further comprising: a front-end agent;
the front-end agent is used for storing a set load balancing strategy, receiving the first service request initiated by a user, and sending the first service request to a corresponding first front-end application according to the load balancing strategy.
6. The distributed server architecture of claim 1, further comprising: a third-party access management module;
the third-party access management module is used for receiving a third service request sent by a third-party access channel, sending the third service request to the data exchange system, receiving a third service processing result fed back by the data exchange system, and feeding back the third service processing result through the corresponding third-party access channel;
the data exchange system is further configured to call a service interface corresponding to the central application according to the service type corresponding to the third service request, receive a third service processing result fed back by the corresponding service interface, and send the third service processing result to the third-party access management module.
7. A service method based on the distributed server architecture according to any one of claims 1 to 6, comprising:
displaying service data to a user by using the first front-end application, and receiving a first service request initiated by the user according to the displayed service data;
sending the first service request to the data exchange system by using the first front-end application, and calling a service interface corresponding to the central application by using the data exchange system according to the service type corresponding to the first service request;
processing the first service request by using the central application through the called service interface, and feeding back a processed first service processing result to the data exchange system;
and feeding back the first service processing result to a user through the first front-end application by using the data exchange system.
8. The service method according to claim 7, wherein the using the data exchange system to invoke the service interface corresponding to the central application according to the service type corresponding to the first service request includes:
receiving the first service request sent by the first front-end application by using the unified service access interface in the data exchange system, and putting the first service request into a message queue of the data exchange system;
the core processing module in the data exchange system is utilized to take out the first service request from the message queue in sequence, and the first service request is sent to a corresponding protocol adapter according to the service code carried in the first service request;
and calling a corresponding service interface provided by the central application according to the first service request by using the protocol adapter in the data exchange system.
9. Service method according to claim 7 or 8,
further comprising: storing the service address of each service interface corresponding to the central application by using a service registration center in a high-speed service framework, and sending the service address of each service interface corresponding to the central application to the data exchange system;
the calling of the service interface corresponding to the central application comprises: sending a calling instruction to a service engine in the high-speed service framework by using the data exchange system according to the service address of each service interface corresponding to the central application; and calling the corresponding service interface by using the service engine according to the service address of the corresponding service interface carried in the calling instruction.
10. The service method according to claim 7,
further comprising: receiving a second service request initiated by a user by utilizing the second front-end application arranged on the intranet side, and sending the second service request to the central application;
and/or,
before the first front-end application is utilized to receive a first service request initiated by a user according to the displayed service data, the method further comprises the following steps: receiving the first service request initiated by a user by using the front-end agent, and sending the first service request to a corresponding first front-end application according to a stored load balancing strategy;
and/or,
further comprising: receiving a third service request sent by a third-party access channel by using the third-party access management module, sending the third service request to the data exchange system, calling a service interface corresponding to the central application by using the data exchange system according to a service type corresponding to the third service request, and sending a third service processing result fed back by the corresponding service interface to the third-party access management module; and feeding back the third service processing result through a corresponding third party access channel by utilizing the third party access management module.
CN201610865425.8A 2016-09-29 2016-09-29 Distributed server configuration and service method thereof Pending CN106657232A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610865425.8A CN106657232A (en) 2016-09-29 2016-09-29 Distributed server configuration and service method thereof

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610865425.8A CN106657232A (en) 2016-09-29 2016-09-29 Distributed server configuration and service method thereof

Publications (1)

Publication Number Publication Date
CN106657232A true CN106657232A (en) 2017-05-10

Family

ID=58854115

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610865425.8A Pending CN106657232A (en) 2016-09-29 2016-09-29 Distributed server configuration and service method thereof

Country Status (1)

Country Link
CN (1) CN106657232A (en)

Cited By (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107147651A (en) * 2017-05-18 2017-09-08 深圳房讯通信息技术有限公司 A kind of gray scale delivery system and its dissemination method
CN107463434A (en) * 2017-08-11 2017-12-12 恒丰银行股份有限公司 Distributed task processing method and device
CN107465726A (en) * 2017-07-20 2017-12-12 腾讯科技(深圳)有限公司 Resource regulating method and device
CN107608804A (en) * 2017-09-21 2018-01-19 山东浪潮云服务信息科技有限公司 A kind of task processing system and method
CN108038009A (en) * 2017-12-22 2018-05-15 金蝶软件(中国)有限公司 Front and back end exchange method, device and computer equipment based on Web applications
CN108628599A (en) * 2018-05-03 2018-10-09 山东浪潮通软信息科技有限公司 A kind of method, apparatus and system of adaptable interface service
CN108768727A (en) * 2018-05-31 2018-11-06 康键信息技术(深圳)有限公司 Access method, electronic device and the readable storage medium storing program for executing of third party's service
CN108933807A (en) * 2017-05-27 2018-12-04 广州市呼百应网络技术股份有限公司 A kind of layer-stepping project service platform
CN109343829A (en) * 2018-08-09 2019-02-15 广州瀚信通信科技股份有限公司 Frame is administered in a kind of service of declining of java language distribution
CN109587280A (en) * 2019-01-21 2019-04-05 山东达创网络科技股份有限公司 A kind of Business Process Management method and device
CN109726593A (en) * 2018-12-31 2019-05-07 联动优势科技有限公司 A method and device for realizing data sandbox
CN109739654A (en) * 2018-08-10 2019-05-10 比亚迪股份有限公司 Message-oriented middleware and method for message transmission
CN110138753A (en) * 2019-04-26 2019-08-16 中国工商银行股份有限公司 Distributed message service system, method, equipment and computer readable storage medium
CN110572478A (en) * 2019-09-30 2019-12-13 重庆紫光华山智安科技有限公司 Data transmission method and system based on distributed architecture service and FTP service
CN110839001A (en) * 2018-08-15 2020-02-25 中国移动通信集团重庆有限公司 Apparatus, method, apparatus and medium for processing batch files
CN111240647A (en) * 2020-01-15 2020-06-05 海南新软软件有限公司 Digital asset transaction middling product architecture
CN112131017A (en) * 2020-09-15 2020-12-25 北京值得买科技股份有限公司 Interface design method for calendar service
CN114422169A (en) * 2021-12-07 2022-04-29 中国科学院国家授时中心 An internal and external network data display system and display method based on WCF technology

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1968283A (en) * 2006-05-12 2007-05-23 华为技术有限公司 Network management system and method
CN105681463A (en) * 2016-03-14 2016-06-15 浪潮软件股份有限公司 Distributed service framework and distributed service calling system
CN105827671A (en) * 2015-01-04 2016-08-03 深圳市领耀东方科技股份有限公司 System platform characterized by distributed use and centralized management and portal server

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1968283A (en) * 2006-05-12 2007-05-23 华为技术有限公司 Network management system and method
CN105827671A (en) * 2015-01-04 2016-08-03 深圳市领耀东方科技股份有限公司 System platform characterized by distributed use and centralized management and portal server
CN105681463A (en) * 2016-03-14 2016-06-15 浪潮软件股份有限公司 Distributed service framework and distributed service calling system

Cited By (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107147651A (en) * 2017-05-18 2017-09-08 深圳房讯通信息技术有限公司 A kind of gray scale delivery system and its dissemination method
CN107147651B (en) * 2017-05-18 2020-07-31 深圳房讯通信息技术有限公司 Gray level publishing system and publishing method thereof
CN108933807A (en) * 2017-05-27 2018-12-04 广州市呼百应网络技术股份有限公司 A kind of layer-stepping project service platform
CN107465726A (en) * 2017-07-20 2017-12-12 腾讯科技(深圳)有限公司 Resource regulating method and device
CN107463434A (en) * 2017-08-11 2017-12-12 恒丰银行股份有限公司 Distributed task processing method and device
CN107608804A (en) * 2017-09-21 2018-01-19 山东浪潮云服务信息科技有限公司 A kind of task processing system and method
CN108038009A (en) * 2017-12-22 2018-05-15 金蝶软件(中国)有限公司 Front and back end exchange method, device and computer equipment based on Web applications
CN108628599A (en) * 2018-05-03 2018-10-09 山东浪潮通软信息科技有限公司 A kind of method, apparatus and system of adaptable interface service
CN108768727A (en) * 2018-05-31 2018-11-06 康键信息技术(深圳)有限公司 Access method, electronic device and the readable storage medium storing program for executing of third party's service
CN108768727B (en) * 2018-05-31 2023-03-31 康键信息技术(深圳)有限公司 Method for accessing third-party service, electronic device and readable storage medium
CN109343829A (en) * 2018-08-09 2019-02-15 广州瀚信通信科技股份有限公司 Frame is administered in a kind of service of declining of java language distribution
CN109739654A (en) * 2018-08-10 2019-05-10 比亚迪股份有限公司 Message-oriented middleware and method for message transmission
CN110839001A (en) * 2018-08-15 2020-02-25 中国移动通信集团重庆有限公司 Apparatus, method, apparatus and medium for processing batch files
CN109726593A (en) * 2018-12-31 2019-05-07 联动优势科技有限公司 A method and device for realizing data sandbox
CN109726593B (en) * 2018-12-31 2021-02-23 联动优势科技有限公司 A method and device for realizing data sandbox
CN109587280A (en) * 2019-01-21 2019-04-05 山东达创网络科技股份有限公司 A kind of Business Process Management method and device
CN110138753A (en) * 2019-04-26 2019-08-16 中国工商银行股份有限公司 Distributed message service system, method, equipment and computer readable storage medium
CN110572478A (en) * 2019-09-30 2019-12-13 重庆紫光华山智安科技有限公司 Data transmission method and system based on distributed architecture service and FTP service
CN111240647A (en) * 2020-01-15 2020-06-05 海南新软软件有限公司 Digital asset transaction middling product architecture
CN112131017A (en) * 2020-09-15 2020-12-25 北京值得买科技股份有限公司 Interface design method for calendar service
CN114422169A (en) * 2021-12-07 2022-04-29 中国科学院国家授时中心 An internal and external network data display system and display method based on WCF technology

Similar Documents

Publication Publication Date Title
CN106657232A (en) Distributed server configuration and service method thereof
US11403684B2 (en) System, manufacture, and method for performing transactions similar to previous transactions
US10872000B2 (en) Late connection binding for bots
US11070626B2 (en) Managing messages sent between services
US7088995B2 (en) Common service platform and software
AU2006233229B2 (en) Service broker integration layer for supporting telecommunication client service requests
AU2002322282C1 (en) Integrating enterprise support systems
US8023927B1 (en) Abuse-resistant method of registering user accounts with an online service
US20170148021A1 (en) Homogenization of online flows and backend processes
CN1980243B (en) Method and system for supporting telecommunications customer service requests
US11055754B1 (en) Alert event platform
CN103098433A (en) SERVLET API and method for XMPP protocol
US20100082737A1 (en) Dynamic service routing
WO2014165967A1 (en) Method and system for managing cloud portals, and billing system therefor
US9652309B2 (en) Mediator with interleaved static and dynamic routing
US8369504B2 (en) Cross channel contact history management
CN109492985A (en) A kind of checking method, apparatus and system
CN104240070A (en) Data release service system and method
CN101771724B (en) Heterogeneous distributed information integration method, device and system
US8510707B1 (en) Mainframe-based web service development accelerator
US20040255006A1 (en) System and method of handling a web service call
US20020188666A1 (en) Lightweight dynamic service conversation controller
CN105678613A (en) A business processing method, device and system
CN102054213A (en) Information integration method, device and system
CN110827001A (en) Accounting event bookkeeping method, system, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20170510