Disclosure of Invention
In view of this, the embodiments of the present invention provide a service request processing method, device and system, and a service configuration method and device, which can decouple the algorithm model from the service logic, reduce development cost, and improve service processing efficiency.
In a first aspect, an embodiment of the present invention provides a service configuration method, including:
acquiring a configuration file of a service;
determining an interface of the service and identification information corresponding to the interface according to the configuration file, wherein the interface comprises an identification of a container, and an algorithm model is deployed in the container; and
storing the interface and the identification information correspondingly in a configuration database.
Preferably, acquiring the configuration file of the service comprises:
Monitoring a service discovery system;
and if the configuration file in the service discovery system is changed, acquiring the configuration file from the service discovery system.
Preferably, determining the interface of the service and the identification information corresponding to the interface according to the configuration file comprises:
parsing the configuration file to obtain the identification information, and operation flow information and operation unit information corresponding to the identification information;
creating an operation unit according to the operation unit information, wherein the operation unit comprises an identification of the container; and
encapsulating the operation unit into the interface according to the operation flow information.
In a second aspect, an embodiment of the present invention provides a service request processing method, including:
receiving a service request sent by a service party, wherein the service request comprises parameters and identification information corresponding to an interface;
acquiring the interface from a configuration database according to the identification information, wherein the interface comprises an identification of a container, and an algorithm model is deployed in the container;
transmitting the parameters into the interface, so that the interface sends the parameters to the container according to the identification of the container and outputs a calling result according to a calculation result fed back by the container, wherein the calculation result is determined by the container according to the parameters and the algorithm model; and
determining a request result according to the calling result, and feeding back the request result to the service party.
Preferably, determining the request result according to the calling result comprises:
storing the calling result in the configuration database; and
sending the calling result stored in the configuration database to a vector database through a data exchange tool, and receiving the request result fed back by the vector database.
Preferably, the interface comprises n operation units, wherein n is an integer greater than or equal to 2, and each operation unit has a corresponding execution order;
the interface sends the parameters to the container according to the identification of the container, and outputs a calling result according to the calculation result fed back by the container, comprising:
the operation unit ranked 1st in the execution order sends the parameters to the 1st container according to the identification of the 1st container included in the operation unit, and inputs the calculation result fed back by the 1st container to the operation unit ranked 2nd;
the operation unit ranked ith sends the calculation result input by the operation unit ranked (i-1)th to the ith container according to the identification of the ith container included in the operation unit ranked ith, and inputs the calculation result fed back by the ith container to the operation unit ranked (i+1)th; and
the operation unit ranked nth sends the calculation result input by the operation unit ranked (n-1)th to the nth container according to the identification of the nth container included in the operation unit, and outputs the calculation result fed back by the nth container, wherein the calculation result fed back by the nth container is the calling result, and i is an integer greater than 1 and less than n.
In a third aspect, an embodiment of the present invention provides a service configuration apparatus, including:
the acquisition module is configured to acquire a configuration file of a service;
the determining module is configured to determine an interface of the service and identification information corresponding to the interface according to the configuration file, wherein the interface comprises an identification of a container, and an algorithm model is deployed in the container;
and the storage module is configured to correspondingly store the interface and the identification information in a configuration database.
In a fourth aspect, an embodiment of the present invention provides a service request processing apparatus, including:
the receiving module is configured to receive a service request sent by a service party, wherein the service request comprises parameters and identification information corresponding to an interface;
the acquisition module is configured to acquire the interface from a configuration database according to the identification information, wherein the interface comprises an identification of a container, and an algorithm model is deployed in the container;
the calling module is configured to transmit the parameters into the interface, so that the interface sends the parameters to the container according to the identification of the container and outputs a calling result according to a calculation result fed back by the container, wherein the calculation result is determined by the container according to the parameters and the algorithm model;
and the determining module is configured to determine a request result according to the calling result and feed back the request result to the service party.
In a fifth aspect, an embodiment of the present invention provides a service request processing system, including the service configuration device described in the foregoing embodiment and the service request processing device described in the foregoing embodiment.
In a sixth aspect, an embodiment of the present invention provides an electronic device, including:
one or more processors;
storage means for storing one or more programs,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of any of the embodiments described above.
In a seventh aspect, embodiments of the present invention provide a computer readable medium having stored thereon a computer program which, when executed by a processor, implements a method as described in any of the above embodiments.
One embodiment of the invention has the following advantage or beneficial effect: dynamic configuration of the service can be realized through the configuration file. Because the algorithm model is separated from the configuration file, the algorithm model is decoupled from the business logic; when the algorithm model corresponding to the business changes, the code does not need to be re-developed, which reduces development cost and improves business request processing efficiency.
Further effects of the above optional implementations are described below in connection with the embodiments.
Detailed Description
Exemplary embodiments of the present invention will now be described with reference to the accompanying drawings, in which various details of the embodiments of the present invention are included to facilitate understanding and should be considered merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the invention. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
As shown in fig. 1, an embodiment of the present invention provides a service configuration method, including:
and 101, acquiring a configuration file of the service.
Different business services can be provided through configuration files of different businesses, such as payment services, image recognition services and image feature extraction services.
The configuration file of the service may be uploaded by a provider of the service or may be obtained from a service discovery system.
Step 102, determining interfaces of the service and identification information corresponding to the interfaces according to the configuration file, wherein the interfaces comprise identifications of containers, and algorithm models are deployed in the containers.
The identification information corresponding to the interface may include an identification of the interface and an identification of the service to which the interface belongs. For example, if the identification of a registration interface is a and the identification of the service to which the registration interface belongs is A, the identification information corresponding to the registration interface is A and a. In the process of calling the interface, the called interface can be accurately determined through the identification information corresponding to the interface.
The embodiment of the invention associates the algorithm model with the interface through the identification of the container. An interface may include the identifications of one or more containers. The identification of a container may be the domain name corresponding to the container. A container is instantiated from a container image, encapsulated as an interface service, and provides the corresponding algorithm service externally; the container is mounted under the domain name through its IP address. One domain name can correspond to one or more container instances created from the container image, and when the computing capacity of one container is insufficient, the IP addresses of further instances of the container image can be mounted under the same domain name for load balancing. By mounting additional container instances under the domain name, service requests with a larger computation load can be processed and the service processing capability is improved.
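The domain-name mounting and load-balancing idea above can be sketched in Python as follows. This is a minimal illustration, not part of the patented method: the registry, domain names and addresses are all hypothetical, and a real deployment would rely on DNS or a service mesh rather than an in-process table.

```python
from itertools import cycle

class ContainerRegistry:
    """Maps a container identification (here, a domain name) to the
    addresses of one or more container instances started from the same
    image, and balances requests across them round-robin."""

    def __init__(self):
        self._backends = {}   # domain name -> list of instance addresses
        self._cursors = {}    # domain name -> round-robin iterator

    def mount(self, domain, address):
        # Mount another container instance under the domain name to
        # absorb service requests with a larger computation load.
        self._backends.setdefault(domain, []).append(address)
        self._cursors[domain] = cycle(self._backends[domain])

    def resolve(self, domain):
        # Pick the next backend for this domain in round-robin order.
        return next(self._cursors[domain])

registry = ContainerRegistry()
registry.mount("detect.model.internal", "10.0.0.1:8500")
registry.mount("detect.model.internal", "10.0.0.2:8500")
```

With two instances mounted, successive calls to `resolve` alternate between the two addresses, which is the load-balancing behavior the paragraph describes.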
The algorithm model may be a visual target tracking algorithm, a regression algorithm model, or the like.
Step 103, correspondingly storing the interface and the identification information in a configuration database.
And correspondingly storing the interfaces and the corresponding identification information thereof so as to realize the calling of the corresponding interfaces according to the identification information.
The method can realize the dynamic configuration of the service through the configuration file. The algorithm model is separated from the configuration file, so that the algorithm model is decoupled from the business logic, and when the algorithm model corresponding to the business changes, the code does not need to be re-developed, so that the development cost is reduced.
In one embodiment of the present invention, obtaining a configuration file of a service includes:
Monitoring a service discovery system;
If the configuration file in the service discovery system is changed, the configuration file is acquired from the service discovery system.
The service discovery system may be ETCD, Consul, or the like. ETCD is a distributed, consistent key-value storage system for shared configuration and service discovery; Consul is service management software supporting distributed, highly available service discovery and configuration sharing across multiple data centers.
Taking ETCD as an example, the service discovery system may include one or more servers on which ETCD is deployed; an ETCD service cluster formed by multiple servers has higher stability and reliability than a single server.
In the embodiment of the invention, a change of the configuration file refers to modification or addition of the configuration file: if a configuration file in the service discovery system is modified, the modified configuration file is obtained from the service discovery system, and if a configuration file is newly added, the newly added configuration file is obtained from the service discovery system. In an actual application scenario, if deletion of a configuration file in the service discovery system is detected, the interface corresponding to the configuration file and the identification information corresponding to the interface are deleted from the configuration database according to the identification of the deleted configuration file.
By monitoring the service discovery system, the information of the configuration file change can be timely obtained, and further configured services can be timely adjusted.
In one embodiment of the present invention, determining, according to a configuration file, an interface of a service and identification information corresponding to the interface includes:
Analyzing the configuration file to obtain identification information, and operation flow information and operation unit information corresponding to the identification information;
and creating an operation unit according to the operation unit information, wherein the operation unit comprises an identification of the container.
And according to the operation flow information, the operation unit is packaged into an interface.
One configuration file may generate at least one interface, and one interface includes at least one operation unit. For example, service 1 corresponds to a search interface comprising the operation units detection, quality, feature and matching; service 2 corresponds to a registration interface and a search interface, wherein the search interface is the same as that of service 1, and the registration interface comprises the operation units detection, liveness, feature and recognition.
An operation unit corresponds to an algorithm model; for example, a detection unit corresponds to a detection algorithm model, a recognition unit corresponds to a recognition algorithm model, and an attribute unit corresponds to an attribute algorithm model. The operation flow information includes information such as the logical order of the operation units in the interface, for example, the output of the detection unit is the input of the quality unit, and the output of the quality unit is the input of the recognition unit.
As shown in fig. 2, a schematic diagram describing the relationship among the operation units, interfaces, and services is shown. The service comprises two interfaces, interface 1 and interface 2, interface 1 comprises three operation units, and interface 2 comprises two operation units.
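The parse-create-encapsulate steps can be sketched as below. The JSON layout of the configuration file is a hypothetical assumption for illustration; the patent does not prescribe a concrete file format.

```python
import json

# Hypothetical configuration file: identification information, the
# operation units (each referencing a container identification), and
# the operation flow (the execution order of the units).
CONFIG = json.dumps({
    "service": "service_1",
    "interface": "search",
    "units": {
        "detection": {"container": "detect.internal"},
        "quality":   {"container": "quality.internal"},
    },
    "flow": ["detection", "quality"],
})

class OperationUnit:
    def __init__(self, name, container):
        self.name = name
        self.container = container  # identification of the container

def parse_config(text):
    """Parse the configuration file into identification information,
    operation unit information and operation flow information, create
    the operation units, and order them per the flow so they can be
    encapsulated into an interface."""
    cfg = json.loads(text)
    units = [OperationUnit(n, cfg["units"][n]["container"]) for n in cfg["flow"]]
    key = (cfg["service"], cfg["interface"])  # identification information
    return key, units

key, units = parse_config(CONFIG)
```

The returned `key` would be stored with the encapsulated interface in the configuration database, so later service requests can look the interface up by its identification information.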
In one embodiment of the invention, an algorithm model is deployed in a container, comprising:
The algorithm model is deployed in the container through TensorFlow Serving.
TensorFlow Serving is an open-source serving system suitable for deploying machine learning models; it is flexible, high-performance, and applicable to production environments.
As shown in fig. 3, a schematic diagram of a container is provided. The container includes a TensorFlow model, TensorFlow Serving, and a Client module; the Client module further includes a prediction sub-module, a preprocessing sub-module, and a service sub-module, wherein the service sub-module is configured to interact with an operation unit. The preprocessing sub-module is used for converting images, texts or other data sent by the operation unit into tensors. The prediction sub-module sends the tensor to TensorFlow Serving based on gRPC. gRPC is a high-performance, general-purpose open-source RPC (Remote Procedure Call) framework.
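The three Client sub-modules can be sketched as plain functions, shown below. This is not the real TensorFlow Serving gRPC API: the "tensor" is simplified to a list of floats, and `serving_stub` is a stand-in for the gRPC PredictionService stub (which a real client would call with a `PredictRequest` containing a `TensorProto`).

```python
def preprocess(image_bytes):
    # Preprocessing sub-module: convert raw data sent by the operation
    # unit into a "tensor" (here: bytes scaled to floats in [0, 1]).
    return [b / 255.0 for b in image_bytes]

def predict(tensor, serving_stub):
    # Prediction sub-module: forward the tensor to the model server
    # and return its computation result.
    return serving_stub(tensor)

def handle_request(image_bytes, serving_stub):
    # Service sub-module: the entry point the operation unit talks to.
    return predict(preprocess(image_bytes), serving_stub)

# A stand-in "model" that sums the tensor, since no real TensorFlow
# Serving endpoint is available in this sketch.
result = handle_request(bytes([0, 255]), lambda t: sum(t))
```

The division of labor matches fig. 3: the service sub-module faces the operation unit, preprocessing produces the tensor, and prediction talks to the model server.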
As shown in fig. 4, an embodiment of the present invention provides a service request processing method, including:
step 401, receiving a service request sent by a service party, wherein the service request comprises parameters and identification information corresponding to an interface.
The business party may be a business system that requires an interface to be invoked. The parameters may be data such as pictures, text, etc. The service party can call a plurality of interfaces through one service request, and the called interfaces can belong to one service or different services.
For example, the identification information corresponding to service request 1 includes service A and interface a, and the corresponding parameter is picture 1, so the call to interface a of service A can be realized through service request 1, with picture 1 as the input parameter of the interface. The identification information corresponding to service request 2 includes service A with its interface a and service B with its interface b, wherein the parameter corresponding to interface a is picture 1 and the parameter corresponding to interface b is picture 2. The calls to interface a of service A and interface b of service B can be realized through service request 2, with picture 1 and picture 2 as the respective input parameters.
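The two-request example above can be sketched as a lookup by (service, interface) identification information. The configured interfaces are stubbed as functions; all names are hypothetical.

```python
# Stand-ins for interfaces already stored in the configuration
# database, keyed by (service identification, interface identification).
CONFIG_DB = {
    ("A", "a"): lambda p: ("A.a", p),
    ("A", "b"): lambda p: ("A.b", p),
    ("B", "b"): lambda p: ("B.b", p),
}

def process_request(request):
    """Look up each interface named in the service request by its
    identification information and pass it the matching parameter."""
    results = {}
    for (service, interface), params in request.items():
        results[(service, interface)] = CONFIG_DB[(service, interface)](params)
    return results

# Service request 2 from the example: two interfaces of two services
# called through a single request.
calls = process_request({("A", "a"): "picture_1", ("B", "b"): "picture_2"})
```

Each called interface receives only its own input parameter, which is why the identification information must pair every interface with its parameter inside the request.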
Step 402, acquiring an interface from a configuration database according to the identification information, wherein the interface comprises an identification of a container, and an algorithm model is deployed in the container.
The relationship between the service, the interface and the operation unit has been described in the foregoing embodiments and will not be repeated here.
Step 403, the parameters are transmitted into the interface, so that the interface sends the parameters to the container according to the identification of the container, and the calling result is output according to the calculation result fed back by the container, wherein the calculation result is determined by the container according to the parameters and the algorithm model.
The interface may include only one operation unit; in this case, the output result of that operation unit is the calling result. The interface (or operation unit) interacts with the container based on HTTP (HyperText Transfer Protocol) or gRPC.
And step 404, determining a request result according to the calling result, and feeding back the request result to the service party.
In an actual application scenario, depending on the service, the call results output by some interfaces can be used directly as request results, while the call results output by other interfaces need further processing to obtain the request result; specific cases are described in subsequent embodiments.
In the service request processing process, the corresponding algorithm model can be called for calculation according to the identification of the container included in the interface; if the algorithm model corresponding to the interface changes, only the identification of the container in the interface needs to be changed, without rewriting code. The method can thereby improve the processing efficiency of service requests.
In one embodiment of the present invention, determining the request result based on the call result includes:
storing the calling result to a configuration database;
And sending the calling result stored in the configuration database to the vector database according to the data exchange tool, and receiving the request result fed back by the vector database.
Taking an image recognition service as an example, the calling result is the extracted feature of the image to be recognized, and the features of template images are stored in the vector database. The data exchange tool sends the feature of the image to be recognized to the vector database, and the vector database obtains a recognition result according to the feature of the image to be recognized and the features of the template images; the recognition result is the request result.
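The vector-database matching step can be sketched as a similarity search, shown below. This is a minimal illustration under stated assumptions: cosine similarity and a linear scan stand in for whatever index a production vector database would use, and the template names and features are invented.

```python
import math

# Features of template images stored in the vector database.
TEMPLATES = {
    "alice": [1.0, 0.0],
    "bob":   [0.0, 1.0],
}

def cosine(a, b):
    # Cosine similarity between two feature vectors.
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

def recognize(feature):
    """Compare the feature of the image to be recognized (the calling
    result handed over by the data exchange tool) with the template
    features and feed back the best match as the request result."""
    return max(TEMPLATES, key=lambda name: cosine(feature, TEMPLATES[name]))

match = recognize([0.9, 0.1])
```

A feature close to a stored template is recognized as that template, which is the recognition result fed back to the service party as the request result.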
In one embodiment of the invention, the interface comprises n operation units, wherein n is an integer greater than or equal to 2, and each operation unit has a corresponding execution order;
The interface sends the parameters to the container according to the identification of the container, and outputs a calling result according to the calculation result fed back by the container, comprising:
the operation unit ranked 1st in the execution order sends the parameters to the 1st container according to the identification of the 1st container included in the operation unit, and inputs the calculation result fed back by the 1st container to the operation unit ranked 2nd;
the operation unit ranked ith sends the calculation result input by the operation unit ranked (i-1)th to the ith container according to the identification of the ith container included in the operation unit, and inputs the calculation result fed back by the ith container to the operation unit ranked (i+1)th; and
the operation unit ranked nth sends the calculation result input by the operation unit ranked (n-1)th to the nth container according to the identification of the nth container included in the operation unit, and outputs the calculation result fed back by the nth container, wherein the calculation result fed back by the nth container is the calling result, and i is an integer greater than 1 and less than n.
Taking the search interface as an example, the search interface comprises the operation units detection, quality, feature and matching, whose execution orders are 1, 2, 3 and 4 respectively. The detection unit sends the parameters in the service request to the detection container and inputs the calculation result fed back by the detection container to the quality unit; the quality unit sends that calculation result to the quality container and inputs the calculation result fed back by the quality container to the feature unit; the feature unit sends that calculation result to the feature container and inputs the calculation result fed back by the feature container to the matching unit; the matching unit sends that calculation result to the matching container and outputs the calculation result fed back by the matching container.
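The chained execution above can be sketched as a pipeline of operation units, each forwarding its input to its container and passing the fed-back result to the next-ranked unit. The containers are stubbed as functions here; in the described system each would be a remote call addressed by the container identification.

```python
class OperationUnit:
    """An operation unit forwards its input to its container (stubbed
    as a plain function) and returns the computation result."""
    def __init__(self, name, container):
        self.name = name
        self.container = container

    def run(self, data):
        return self.container(data)

def call_interface(units, params):
    """Execute the units in execution order: the 1st unit receives the
    request parameters, each later unit receives the result fed back to
    the previous unit, and the nth result is the calling result."""
    result = params
    for unit in units:  # units are listed in execution order
        result = unit.run(result)
    return result

# Stub containers mimicking the search interface example:
# detection -> quality -> feature -> matching.
search_interface = [
    OperationUnit("detection", lambda x: x + ["detected"]),
    OperationUnit("quality",   lambda x: x + ["quality_ok"]),
    OperationUnit("feature",   lambda x: x + ["feature_vec"]),
    OperationUnit("matching",  lambda x: x + ["matched"]),
]
calling_result = call_interface(search_interface, ["img"])
```

Changing which algorithm model a unit calls only requires swapping the container it points to, which is the decoupling the embodiment claims.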
In one embodiment of the invention, the method further comprises determining the time the container takes from receiving the data sent by the corresponding operation unit to outputting the calculation result, and saving that time. The time corresponding to a container reflects its performance, so changes in the performance of the container can be monitored through this time.
In one embodiment of the invention, the method further comprises sending an alarm signal if the operation unit fails to connect with the algorithm model. The alarm signal can take the form of an alarm short message, an alarm mail, or another form, and may be sent to the service party or to a maintainer of the service request processing apparatus.
In one embodiment of the invention, the method further comprises generating a log of the operation unit from the input and output of the operation unit, and saving the log. Maintaining the log facilitates subsequent checking of the operation of the operating unit.
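The three monitoring features just described (timing the container, alarming on connection failure, logging unit input and output) can be combined in one wrapper, sketched below with in-memory lists standing in for the saved times/logs and the alarm channel; all names are hypothetical.

```python
import time

logs = []    # saved operation unit logs (input, output, elapsed time)
alarms = []  # alarm signals (an SMS or mail in a real deployment)

def monitored(name, container, data):
    """Run one operation-unit call while (1) timing how long the
    container takes from receiving the data to returning its result,
    (2) logging the unit's input and output, and (3) emitting an alarm
    signal if the connection to the algorithm model fails."""
    start = time.perf_counter()
    try:
        result = container(data)
    except ConnectionError:
        alarms.append(f"unit {name}: failed to connect to algorithm model")
        raise
    elapsed = time.perf_counter() - start
    logs.append({"unit": name, "input": data, "output": result,
                 "seconds": elapsed})
    return result

def broken_container(x):
    # Simulates an unreachable algorithm model.
    raise ConnectionError("model unreachable")

out = monitored("detection", lambda x: x * 2, 21)
try:
    monitored("quality", broken_container, 1)
except ConnectionError:
    pass
```

The saved log entries support the subsequent checking of unit operation mentioned above, and the alarm list shows one signal per failed connection.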
As shown in fig. 5, an embodiment of the present invention provides a service request processing method, including:
step 501, listening to a service discovery system.
Step 502, when a configuration file is newly added in the service discovery system, the configuration file is obtained from the service discovery system.
Step 503, parsing the configuration file to obtain the identification information, and the operation flow information and operation unit information corresponding to the identification information.
Step 504, creating an operation unit according to the operation unit information, wherein the operation unit comprises an identification of a container.
Step 505, encapsulating the operation unit into an interface according to the operation flow information, wherein the interface comprises the identification of the container, and an algorithm model is deployed in the container.
Step 506, correspondingly storing the interface and the identification information in a configuration database.
Step 507, receiving a service request sent by a service party, wherein the service request comprises parameters and identification information corresponding to a target interface.
Step 508, obtaining a target interface from the configuration database according to the identification information, wherein the target interface comprises an identification of a container, and an algorithm model is deployed in the container.
Step 509, transmitting the parameters into a target interface, so that the target interface transmits the parameters to the container according to the identification of the container, and outputting a calling result according to a calculation result fed back by the container, wherein the calculation result is determined by the container according to the parameters and an algorithm model.
Step 510, storing the calling result in the configuration database.
Step 511, sending the calling result stored in the configuration database to the vector database according to the data exchange tool, and receiving the request result fed back by the vector database.
And step 512, feeding back the request result to the service party.
The method decouples the business logic from the algorithm model, can realize the dynamic configuration of the business, reduces the development cost and improves the processing efficiency of the business request.
As shown in fig. 6, an embodiment of the present invention provides a service configuration apparatus, including:
An obtaining module 601, configured to obtain a configuration file of a service;
the determining module 602 is configured to determine an interface of the service and identification information corresponding to the interface according to the configuration file, wherein the interface comprises an identification of a container;
the storage module 603 is configured to store the interface and the identification information in a configuration database correspondingly.
In one embodiment of the present invention, the obtaining module 601 is configured to monitor the service discovery system, and obtain the configuration file from the service discovery system if the configuration file in the service discovery system is changed.
In one embodiment of the present invention, the determining module 602 is configured to parse the configuration file to obtain the identification information, the operation flow information and the operation unit information corresponding to the identification information, and create the operation unit according to the operation unit information, where the operation unit includes an identification of a container, and encapsulate the operation unit into an interface according to the operation flow information.
In one embodiment of the invention, the algorithm model is deployed in the container through TensorFlow Serving.
As shown in fig. 7, an embodiment of the present invention provides a service request processing apparatus, including:
The receiving module 701 is configured to receive a service request sent by a service party, wherein the service request comprises parameters and identification information corresponding to an interface;
the acquisition module 702 is configured to acquire an interface from the configuration database according to the identification information, wherein the interface comprises an identification of a container;
The calling module 703 is configured to transmit the parameters to the interface, so that the interface sends the parameters to the container according to the identification of the container, and outputs a calling result according to a calculation result fed back by the container, wherein the calculation result is determined by the container according to the parameters and the algorithm model;
and the determining module 704 is configured to determine a request result according to the calling result and feed back the request result to the service party.
In one embodiment of the invention, the determining module 704 is configured to store the calling result in the configuration database, send the calling result stored in the configuration database to the vector database according to the data exchange tool, and receive the request result fed back by the vector database.
In one embodiment of the invention, the interface comprises n operation units, wherein n is an integer greater than or equal to 2, and each operation unit has a corresponding execution order;
The interface sends the parameters to the container according to the identification of the container, and outputs a calling result according to the calculation result fed back by the container, comprising:
the operation unit ranked 1st in the execution order sends the parameters to the 1st container according to the identification of the 1st container included in the operation unit, and inputs the calculation result fed back by the 1st container to the operation unit ranked 2nd;
the operation unit ranked ith sends the calculation result input by the operation unit ranked (i-1)th to the ith container according to the identification of the ith container included in the operation unit, and inputs the calculation result fed back by the ith container to the operation unit ranked (i+1)th; and
the operation unit ranked nth sends the calculation result input by the operation unit ranked (n-1)th to the nth container according to the identification of the nth container included in the operation unit, and outputs the calculation result fed back by the nth container, wherein the calculation result fed back by the nth container is the calling result, and i is an integer greater than 1 and less than n.
The embodiment of the invention provides a service request processing system, which comprises the service configuration device and the service request processing device.
The embodiment of the invention provides electronic equipment, which comprises:
one or more processors;
storage means for storing one or more programs,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of any of the embodiments described above.
An embodiment of the present invention provides a computer readable medium having stored thereon a computer program which, when executed by a processor, implements a method as described in any of the above embodiments.
Fig. 8 shows an exemplary system architecture 800 to which a service configuration method or a service request processing method or a service configuration apparatus or a service request processing apparatus of an embodiment of the present invention may be applied.
As shown in fig. 8, a system architecture 800 may include terminal devices 801, 802, 803, a network 804, and a server 805. The network 804 serves as a medium for providing communication links between the terminal devices 801, 802, 803 and the server 805. The network 804 may include various connection types, such as wired, wireless communication links, or fiber optic cables, among others.
A user may use the terminal devices 801, 802, 803 to interact with the server 805 through the network 804, so as to receive or send messages and the like. Various client applications, such as shopping applications, web browser applications, search applications, instant messaging tools, mailbox clients, and social platform software (by way of example only), may be installed on the terminal devices 801, 802, 803.
The terminal devices 801, 802, 803 may be a variety of electronic devices having a display screen and supporting web browsing, including but not limited to smartphones, tablets, laptop and desktop computers, and the like.
The server 805 may be a server providing various services, such as a background management server (by way of example only) that provides support for shopping websites browsed by users using the terminal devices 801, 802, 803. The background management server may analyze and otherwise process received data, such as a product information query request, and feed back the processing result (for example, target push information or product information, by way of example only) to the terminal devices.
It should be noted that, the service configuration method or the service request processing method provided in the embodiment of the present invention is generally executed by the server 805, and accordingly, the service configuration device or the service request processing device is generally disposed in the server 805.
It should be understood that the number of terminal devices, networks and servers in fig. 8 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation.
Referring now to Fig. 9, a schematic diagram of a computer system 900 suitable for implementing an embodiment of the present invention is illustrated. The computer system shown in Fig. 9 is only an example, and should not impose any limitation on the functions and scope of use of the embodiments of the present invention.
As shown in fig. 9, the computer system 900 includes a Central Processing Unit (CPU) 901, which can execute various appropriate actions and processes according to a program stored in a Read Only Memory (ROM) 902 or a program loaded from a storage section 908 into a Random Access Memory (RAM) 903. In the RAM 903, various programs and data necessary for the operation of the system 900 are also stored. The CPU 901, ROM 902, and RAM 903 are connected to each other through a bus 904. An input/output (I/O) interface 905 is also connected to the bus 904.
The following components are connected to the I/O interface 905: an input section 906 including a keyboard, a mouse, and the like; an output section 907 including a cathode ray tube (CRT), a liquid crystal display (LCD), a speaker, and the like; a storage section 908 including a hard disk and the like; and a communication section 909 including a network interface card such as a LAN card, a modem, and the like. The communication section 909 performs communication processing via a network such as the Internet. A drive 910 is also connected to the I/O interface 905 as needed. A removable medium 911, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, is mounted on the drive 910 as needed, so that a computer program read out therefrom is installed into the storage section 908 as needed.
In particular, according to embodiments of the present disclosure, the processes described above with reference to flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method shown in the flow chart. In such an embodiment, the computer program may be downloaded and installed from the network via the communication portion 909 and/or installed from the removable medium 911. The above-described functions defined in the system of the present invention are performed when the computer program is executed by a Central Processing Unit (CPU) 901.
The computer readable medium shown in the present invention may be a computer readable signal medium or a computer readable storage medium, or any combination of the two. The computer readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples of a computer-readable storage medium may include, but are not limited to, an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In the present invention, however, the computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, with the computer-readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The modules involved in the embodiments of the present invention may be implemented in software or in hardware. The described modules may also be provided in a processor; for example, a processor may be described as comprising a sending module, an obtaining module, a determining module, and a first processing module. The names of these modules do not, in some cases, constitute a limitation of the modules themselves; for example, the sending module may also be described as "a module that sends a picture acquisition request to a connected server".
As a further aspect, the present invention also provides a computer readable medium, which may be included in the device described in the above embodiments, or may exist separately without being assembled into the device.
The computer readable medium carries one or more programs which, when executed by a device, cause the device to:
acquiring a configuration file of a service;
Determining an interface of the service and identification information corresponding to the interface according to the configuration file, wherein the interface comprises an identification of a container, and an algorithm model is deployed in the container;
And correspondingly storing the interface and the identification information in a configuration database.
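The configuration steps above can be illustrated with a small sketch. The configuration-file schema (`identification`, `operation_units`, `flow` fields), the function name `configure_service`, and the use of a plain dictionary as the configuration database are all assumptions made for demonstration, not part of the claimed implementation.

```python
import json

def configure_service(config_text, config_db):
    """Parse the configuration file, build the interface, and store it
    under its identification information in the configuration database."""
    config = json.loads(config_text)       # acquire and parse the configuration file
    ident = config["identification"]       # identification information of the interface
    # Create one operation unit per entry; each unit holds a container's identification.
    units = [{"container_id": u["container_id"]}
             for u in config["operation_units"]]
    # Package the operation units into an interface according to the flow information.
    interface = {"flow": config["flow"], "units": units}
    # Correspondingly store the interface and the identification information.
    config_db[ident] = interface
    return interface
```

Because the containers and their ordering come entirely from the configuration file, changing an algorithm model amounts to editing the file rather than re-developing code.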
Alternatively, the computer readable medium carries one or more programs which, when executed by a device, cause the device to:
Receiving a service request sent by a service party, wherein the service request comprises parameters and identification information corresponding to an interface;
acquiring the interface from a configuration database according to the identification information, wherein the interface comprises an identification of a container, wherein an algorithm model is deployed in the container;
Transmitting the parameters into the interface so that the interface transmits the parameters to the container according to the identification of the container and outputs a calling result according to a calculation result fed back by the container, wherein the calculation result is determined by the container according to the parameters and the algorithm model;
And determining a request result according to the calling result, and feeding back the request result to the service party.
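The request-handling steps above can be sketched as follows, again under stated assumptions: `call_container` is a stand-in for the container's algorithm-model computation, `handle_request` and the request/response field names are illustrative, and the configuration database is modeled as a dictionary.

```python
def call_container(container_id, params):
    # Stand-in for the calculation performed by the algorithm model
    # deployed in the container identified by container_id.
    return {"container": container_id, "input": params}

def handle_request(request, config_db):
    """Acquire the interface from the configuration database by the
    identification information in the request, pass the parameters to the
    identified container, and feed the request result back to the caller."""
    interface = config_db[request["identification"]]
    # The calling result is determined by the container's calculation result.
    calling_result = call_container(interface["container_id"], request["params"])
    # Determine the request result from the calling result and feed it back.
    return {"status": "ok", "result": calling_result}
```

Note that the service party only supplies parameters and identification information; which model runs, and in which container, is resolved entirely from the stored configuration.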
According to the technical scheme of the embodiments of the present invention, dynamic configuration of a service can be realized through the configuration file. Because the algorithm model is specified through the configuration file rather than in the service code, the algorithm model is decoupled from the business logic; when the algorithm model corresponding to a business changes, the code does not need to be re-developed, which reduces development cost and improves business request processing efficiency.
The above embodiments do not limit the scope of the present invention. It will be apparent to those skilled in the art that various modifications, combinations, sub-combinations and alternatives can occur depending upon design requirements and other factors. Any modifications, equivalent substitutions and improvements made within the spirit and principles of the present invention should be included in the scope of the present invention.