CN119276861A - Computing network scheduling method, network domain, cloud side node, computing power gateway, system and medium - Google Patents
- Publication number
- CN119276861A (application CN202310835746.3A)
- Authority
- CN
- China
- Prior art keywords
- service
- computing
- information
- computing power
- gateway
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/01—Protocols
- H04L67/10—Protocols in which an application is distributed across nodes in the network
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L12/00—Data switching networks
- H04L12/66—Arrangements for connecting between networks having differing types of switching systems, e.g. gateways
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L41/00—Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
- H04L41/40—Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks using virtualisation of network functions or resources, e.g. SDN or NFV entities
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L45/00—Routing or path finding of packets in data switching networks
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L45/00—Routing or path finding of packets in data switching networks
- H04L45/74—Address processing for routing
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L65/00—Network arrangements, protocols or services for supporting real-time applications in data packet communication
- H04L65/1066—Session management
- H04L65/1073—Registration or de-registration
Abstract
The application provides a computing network scheduling method, a network domain, a cloud side node, a computing power gateway, a system, and a medium. The method includes: generating a service identifier according to a service registration request; determining a computing power metric according to the resource state of the service instance corresponding to the service identifier; issuing the service identifier and the SLA information of the corresponding user access to the computing power gateways; and calculating a computing power route according to the computing power metric, the computing power information advertised by the computing power gateways, and the SLA information.
Description
Technical Field
The present application relates to the field of communications technologies, and for example, to a computing network scheduling method, a network domain, a cloud side node, a computing gateway, a system, and a medium.
Background
The development goal of the computing power network is to flexibly meet the integrated demands of various services on computing resources and network connections. Different demand scenarios have different scheduling focuses: experience-class scenarios focus on service-level agreement (Service-Level Agreement, SLA) indicators related to quality of service (Quality of Service, QoS); cost-class scenarios focus on the cost or energy consumption of service resources; and resource-class scenarios focus on service resource usage or state. Taking edge computing as an example, a service is usually scheduled to the nearest node, which is not necessarily the optimal node; in particular, for delay-sensitive services, a service instance selected only according to computing power is often not the optimal choice. Because computing power awareness and computing network scheduling are not integrated, scheduling requirements of different types cannot be met, and equipment resource consumption and cost are high.
Disclosure of Invention
The application provides a computing network scheduling method, a network domain, cloud side nodes, a computing gateway, a computing system and a medium.
The embodiment of the application provides a computing network scheduling method applied to a network domain node, including the following steps:
generating a service identifier according to a service registration request;
determining a computing power metric according to the resource state of the service instance corresponding to the service identifier;
issuing the service identifier and the SLA information of the corresponding user access to the computing power gateways;
and calculating a computing power route according to the computing power metric, the computing power information advertised by the computing power gateways, and the SLA information.
The embodiment of the application also provides a computing network scheduling method applied to a cloud side node, including the following steps:
sending a service registration request to a network domain node according to the type of the managed service;
acquiring the resource state of the service instance corresponding to the service identifier when the service registration succeeds;
and advertising the computing power information to a computing power egress gateway (Egress Gateway, EGW) according to the resource state information, wherein the computing power information is diffused by the computing power egress gateway to the upstream gateway.
The embodiment of the application also provides a computing network scheduling method applied to a computing power ingress gateway (Ingress Gateway, IGW), including the following steps:
acquiring the computing power information advertised by the computing power egress gateway, as well as the service identifier and the service-level agreement (SLA) information of the corresponding user access issued by the network domain node;
calculating a computing power route corresponding to a specific service identifier according to the computing power metric, the computing power information, and the SLA information;
and forwarding the service request message to the corresponding computing power egress gateway according to the computing power route.
The embodiment of the application also provides a computing network scheduling method applied to a computing power egress gateway, including the following steps:
acquiring the resource state of a service instance;
determining a computing power metric and a local service route according to the resource state;
installing the virtual link and link attribute information corresponding to the virtual node, wherein the virtual link and link attribute information is obtained by the computing power egress gateway selecting a service instance for a specific service identifier according to the local service routing table;
and diffusing the computing power information advertised by the cloud side to the computing power ingress gateway.
The embodiment of the application also provides a network domain node, including a memory, a processor, and a computer program stored on the memory and runnable on the processor, wherein the processor implements the above computing network scheduling method when executing the program.
The embodiment of the application also provides a cloud side node, including a memory, a processor, and a computer program stored on the memory and runnable on the processor, wherein the processor implements the above computing network scheduling method when executing the program.
The embodiment of the application also provides a computing power ingress gateway, including a memory, a processor, and a computer program stored on the memory and runnable on the processor, wherein the processor implements the above computing network scheduling method when executing the program.
The embodiment of the application also provides a computing power egress gateway, including a memory, a processor, and a computer program stored on the memory and runnable on the processor, wherein the processor implements the above computing network scheduling method when executing the program.
The embodiment of the application also provides a computing network scheduling system, including the above network domain node and cloud side node;
the network domain node includes a computing network brain, packet network equipment, the computing power ingress gateway, and the computing power egress gateway.
The embodiment of the application also provides a computer-readable storage medium storing a computer program, wherein the program, when executed by a processor, implements the above computing network scheduling method.
Drawings
FIG. 1 is a schematic diagram of a computing power network according to an embodiment;
FIG. 2 is a flowchart of a computing network scheduling method according to an embodiment;
FIG. 3 is a schematic diagram of a service-identifier-based computing power network operation process according to an embodiment;
FIG. 4 is a flowchart of another computing network scheduling method according to an embodiment;
FIG. 5 is a flowchart of another computing network scheduling method according to an embodiment;
FIG. 6 is a flowchart of another computing network scheduling method according to an embodiment;
FIG. 7 is a schematic diagram of an implementation of an integrated computing power awareness and computing network scheduling system according to an embodiment;
FIG. 8 is a schematic diagram of an implementation of a control plane according to an embodiment;
FIG. 9 is a schematic diagram of a local service routing table according to an embodiment;
FIG. 10 is a schematic diagram of a global service routing table according to an embodiment;
FIG. 11 is a schematic diagram of an implementation of a forwarding plane according to an embodiment;
FIG. 12 is a schematic structural diagram of a computing network scheduling apparatus according to an embodiment;
FIG. 13 is a schematic structural diagram of another computing network scheduling apparatus according to an embodiment;
FIG. 14 is a schematic structural diagram of a further computing network scheduling apparatus according to an embodiment;
FIG. 15 is a schematic structural diagram of yet another computing network scheduling apparatus according to an embodiment;
FIG. 16 is a schematic hardware structure diagram of a network domain node according to an embodiment;
FIG. 17 is a schematic hardware structure diagram of a cloud side node according to an embodiment;
FIG. 18 is a schematic hardware structure diagram of a computing power ingress gateway according to an embodiment;
FIG. 19 is a schematic hardware structure diagram of a computing power egress gateway according to an embodiment;
FIG. 20 is a schematic structural diagram of a computing network scheduling system according to an embodiment.
Detailed Description
The application is described below with reference to the drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the application and are not limiting thereof. It should be noted that, without conflict, the embodiments of the present application and features of the embodiments may be arbitrarily combined with each other. It should be further noted that, for convenience of description, only some, but not all of the structures related to the present application are shown in the drawings.
Fig. 1 is a schematic diagram of a computing power network according to an embodiment. As shown in fig. 1, an operational computing power network generally includes an end side, a network side (also referred to as a network domain), and a cloud side. The network domain nodes mainly include a computing network brain, which may be a software-defined network (Software Defined Network, SDN) controller. Nodes in the network domain may also include computing power gateways, specifically the IGW and the EGW, where the IGW may be used to maintain a global service routing table and perform path computation in a distributed computation scenario, and the EGW may be used to advertise computing power information, maintain a local service routing table, select a service instance, and so on. Computing power awareness can be realized either by the computing network brain directly interfacing with a cloud management platform, or by the EGW directly interfacing with an awareness module deployed in the cloud resource pool. The two awareness modes make no essential difference to the integrated scheduling and the awareness of the computing network resource model; the following embodiments mainly take the latter as an example.
To realize integrated computing network scheduling, the embodiments of the application address the following problems: (1) centralized computation and distributed computation are supported at the same time, and the two can be switched flexibly, so a scheduling method needs to be designed that integrates computing power awareness with computing network scheduling and completes the request and issuing of computing power routes; (2) computing domain quality indicators (health score, average processing delay, economic cost) and network domain quality indicators (bandwidth, delay, jitter, packet loss) are not unified, so a preferred factor needs to be defined according to the point of concern, with the other factors serving as constraint conditions; adding constraint conditions increases computation time and consumes CPU resources, which is unfavorable for large-scale deployment; (3) from the perspective of the IGW, service instance resources of the same type may be attached to different EGWs, and multiple service instance resources of the same type may be attached to the same EGW; in the general case, computing power information needs to be continuously updated to the upstream IGW, and, for example, with on the order of 100 EGWs each attached to on the order of 100 service instances and computing power information refreshed every 10 seconds, the IGW must process a very large number of updates, so the scale of computing power information advertised over BGP puts significant pressure on both the control plane and the forwarding plane of the actual network.
Fig. 2 is a flowchart of a computing network scheduling method applied to a network domain node according to an embodiment. The network domain node mainly includes the computing network brain in the computing power network, such as an SDN controller. Some of the steps may also be implemented by a computing power gateway. As shown in fig. 2, the method provided in this embodiment includes:
In step 110, a service identification is generated from the service registration request.
In step 120, a computing power metric is determined according to the resource state of the service instance corresponding to the service identifier.
In step 130, the service identifier and the SLA information of the corresponding user access are issued to the computing power gateways.
In step 140, a computing power route is calculated according to the computing power metric, the computing power information advertised by the computing power gateways, and the SLA information.
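A minimal sketch of how the four steps above might fit together at the network domain node. All class, method, and field names here are illustrative assumptions for demonstration; the patent does not specify concrete data structures, and the delay-first route selection is only one possible SLA policy.

```python
# Hypothetical sketch of the network-domain scheduling flow (steps 110-140).
# All names and structures are illustrative assumptions, not from the patent.
import uuid


class NetworkDomainNode:
    def __init__(self):
        self.services = {}   # SAN-ID -> registration info
        self.metrics = {}    # SAN-ID -> computing power metric

    def register_service(self, registration_request):
        """Step 110: generate a service identifier (SAN-ID) for the request."""
        san_id = f"SAN-{uuid.uuid4().hex[:8]}"
        self.services[san_id] = registration_request
        return san_id

    def update_metric(self, san_id, resource_state):
        """Step 120: derive a computing power metric from the instance's resource state."""
        self.metrics[san_id] = {
            "network_delay": resource_state["compute_delay_ms"],
            "interface_bandwidth": resource_state["total_capacity"],
        }

    def publish(self, san_id, gateways):
        """Step 130: issue the SAN-ID and its SLA information to each gateway."""
        sla = self.services[san_id]["sla"]
        for gw in gateways:
            gw.install(san_id, sla)

    def compute_route(self, san_id, advertised_info):
        """Step 140: pick the advertisement that best satisfies the SLA (delay-first here)."""
        sla = self.services[san_id]["sla"]
        feasible = [a for a in advertised_info if a["delay"] <= sla["max_delay_ms"]]
        return min(feasible, key=lambda a: a["delay"]) if feasible else None
```

In a distributed computation scenario, the same `compute_route` role would sit in the IGW head node rather than in the computing network brain.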
In this embodiment, the service registration request may be initiated by the cloud side node to the computing network brain. The computing network brain may generate a corresponding service identifier for each type of service, which may also be referred to as a service-aware network identifier (SAN-ID), and may issue the SAN-ID and the corresponding SLA information to each computing power gateway according to the registered service, for the conversion and mapping of service quality requirements inside each computing power gateway. The computing power network supports centralized computation, in which the computing network brain calculates a traffic engineering computation path according to the computing power metric, the computing power information advertised by the computing power gateways, and the SLA information; it also supports distributed computation, in which the IGW performs this calculation.
In this embodiment, different types of services are distinguished through the service identifier, so that the same-type services attached to each EGW can be abstracted into a virtual node, and the computing power routing problem can be reduced to a network traffic engineering problem. When computing the computing power route, it is necessary, on the one hand, to determine the reachable path from the head node to the virtual node, i.e., which EGW is reached, and on the other hand, to determine which specific service instance behind the EGW is selected. In the case of centralized computation, the reachable path (the traffic engineering computation path) can be determined by the computing network brain; in the case of distributed computation, it can be determined by the IGW. The specific service instance may be determined by the EGW.
In this embodiment, the computing power metric mainly includes indicators for sensing or measuring computing network resources, which correspond to the network, while the SLA information mainly corresponds to the user access. The SLA information can be flexibly set to satisfy the integrated multi-factor scheduling requirements of the computing network, thereby meeting the scheduling targets of experience-class, cost-class, and resource-class scenarios. By integrating computing power awareness and computing network scheduling, equipment resource consumption and investment can be reduced, the pressure on the control plane and the forwarding plane caused by the expanding scale of computing power information is relieved, and the system has good deployability and scalability. Uniformly issuing the SAN-ID and the mapped SLA information to the computing power gateways provides a foundation for implementing computing network joint traffic engineering (Traffic Engineering, TE) and computing power route preference.
The computing power information can be obtained by the EGW interfacing with a computing power awareness module on the cloud side. The information elements of computing power awareness include, but are not limited to, the SAN-ID, the service instance IP, the delay (cloud service delay), the physical bandwidth (cloud total capacity), the occupied bandwidth (cloud occupied capacity), and the cost (cloud cost), where the physical bandwidth and the occupied bandwidth can be scored comprehensively according to cloud-side indicators (such as CPU utilization and storage resource occupation).
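The information elements listed above could be modeled as follows. The field names, and especially the weights used for the composite capacity score, are assumptions made for illustration; the patent only states that the bandwidth fields can be scored comprehensively from cloud-side indicators.

```python
# Illustrative model of a computing power advertisement as described above.
# Field names and the scoring weights are assumptions for demonstration.
from dataclasses import dataclass


@dataclass
class ComputingPowerInfo:
    san_id: str
    instance_ip: str
    delay_ms: float     # cloud service delay
    physical_bw: float  # cloud total capacity (composite score)
    occupied_bw: float  # cloud occupied capacity (composite score)
    cost: float         # cloud cost


def composite_capacity_score(cpu_util, storage_util, base_capacity,
                             cpu_weight=0.7, storage_weight=0.3):
    """Score occupied capacity from cloud-side indicators (weights are assumed)."""
    utilisation = cpu_weight * cpu_util + storage_weight * storage_util
    return base_capacity * utilisation


info = ComputingPowerInfo("SAN-0001", "10.0.0.5", delay_ms=8.0,
                          physical_bw=100.0,
                          occupied_bw=composite_capacity_score(0.5, 0.2, 100.0),
                          cost=3.0)
```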
In this embodiment, joint modeling and awareness of computing network resources are performed based on the SLA information corresponding to the SAN-ID, so that the computing network joint TE problem can be converted into finding a path from the head node to the virtual node of the service instance that meets the SLA requirements. According to the TE configuration, the path computation element communication protocol (Path Computation Element Communication Protocol, PCEP) can be used to initiate either a centralized computation request (suitable for cross-domain scenarios) or a distributed head-node IGW computation request (suitable for single-domain fast-convergence scenarios).
In one embodiment, the service registration request includes a service domain name, a service IP and port, SLA information, and resource information.
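A hypothetical registration payload containing the four elements named above (service domain name, service IP and port, SLA information, resource information). All concrete values are invented for illustration.

```python
# A hypothetical service registration payload matching the elements listed
# above. All concrete values (domain, IP, port, SLA figures) are assumptions.
import json

registration_request = {
    "service_domain": "video.example.com",  # service domain name
    "service_ip": "192.0.2.10",             # service IP
    "service_port": 8443,                   # service port
    "sla": {                                # SLA information
        "delay_ms": 10,
        "cost": 5,
        "bandwidth_mbps": 200,
        "jitter_ms": 2,
        "reliability": 0.999,
    },
    "resources": {"cpu_cores": 8, "memory_gb": 16},  # resource information
}

payload = json.dumps(registration_request)
```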
In an embodiment, the metrics (Metric) of the SLA information include delay, cost, bandwidth, jitter, and reliability, and the computing power metric includes a network delay converted from the computation delay, a network cost converted from the resource cost, an interface bandwidth converted from the resource total, and an interface bandwidth occupation converted from the resource occupation.
In this embodiment, the SLA information is unified into five key metrics: end-to-end access delay, cost, available bandwidth, jitter, and reliability. Depending on the scheduling target, the metrics of the SLA information may cover experience-class factors such as delay, jitter, and reliability; cost-class factors such as cost or energy consumption; and resource-class factors such as resource usage or state, for example converting the cloud-side remaining resources into available bandwidth. In addition, the computation delay, resource cost, resource total, and resource occupation are correspondingly converted into the network delay, network cost, interface bandwidth, and interface bandwidth occupation to obtain the computing power metric, so that the computing power awareness result can be converted into network factors to realize computing network integration.
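The four conversions described above can be sketched as a simple mapping. The identity conversions and function names are simplifying assumptions; in practice each conversion could apply scaling or normalization that the patent leaves unspecified.

```python
# Sketch of converting cloud-side measurements into network-style metrics,
# as described above; the identity conversions are simplifying assumptions.
def to_network_metrics(compute_delay_ms, resource_cost,
                       resource_total, resource_occupied):
    """Map computing power measurements onto the four network metric fields."""
    return {
        "network_delay_ms": compute_delay_ms,        # computation delay -> network delay
        "network_cost": resource_cost,               # resource cost -> network cost
        "interface_bandwidth": resource_total,       # resource total -> interface bandwidth
        "interface_bw_occupied": resource_occupied,  # occupation -> bandwidth occupation
    }


def remaining_bandwidth(metrics):
    """Resource-class view: remaining cloud resources seen as available bandwidth."""
    return metrics["interface_bandwidth"] - metrics["interface_bw_occupied"]
```

With this shape, a scheduler can treat a service instance exactly like a network link whose attributes are delay, cost, and available bandwidth.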
In an embodiment, the computing power route includes a global service route and a local service route. Calculating the computing power route according to the computing power metric, the computing power information advertised by the computing power gateways, and the SLA information includes: determining a traffic engineering computation path between the computing power ingress gateway and the virtual node mapped by a specific service identifier according to the computing power metric, the computing power information advertised by the computing power gateways, and the SLA information; matching the traffic engineering computation path against the global service routing table to obtain the global service route; and determining the service instance corresponding to the specific service identifier according to the resource state of the service instance to obtain the local service route, wherein the penultimate node of the traffic engineering computation path corresponds to the computing power egress gateway IP and the end node corresponds to the virtual node mapped by the service identifier.
In this embodiment, the specific service identifier may be understood as the service identifier for which a computing power route needs to be calculated, for example, the service identifier corresponding to the current service request message. According to the computing power metric, the network TE indicators, and the SLA requirements corresponding to the SAN-ID, the computing network brain or the IGW head node first determines the traffic engineering computation path between the computing power ingress gateway and the virtual node mapped by the service identifier, obtaining the reachable path from the IGW to the virtual node S-VIP attached to the EGW, and then matches it against the global service routing table to obtain the global service route. The matching principle is that the penultimate node in the traffic engineering computation path is the EGW gateway selected for the corresponding SAN-ID, and the last node is the virtual node mapped by the SAN-ID; the global service routing table can be generated by the IGW after the computing power information has been diffused to the IGW. In addition, the EGW may determine the service instance corresponding to the specific service identifier according to the resource state of the service instance, obtaining the local service route. On this basis, the service route and the computing network path computation can be separated, and computing resources can be deployed arbitrarily on public or private networks.
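The matching principle described above can be sketched directly: a traffic engineering computation path matches a global-service-routing entry when the path's penultimate hop is the entry's egress gateway and its last hop is the SAN-ID's virtual node. The entry and path representations are assumptions for illustration.

```python
# Sketch of the matching rule described above. A TE path matches a global
# service routing entry when its penultimate hop is the entry's EGW and its
# last hop is the SAN-ID's virtual node. Entry structure is an assumption.
def match_global_route(te_path, global_srib, san_id):
    """Return the global-service-routing entry matched by the TE path, if any."""
    if len(te_path) < 2:
        return None
    egw, virtual_node = te_path[-2], te_path[-1]
    for entry in global_srib:
        if (entry["san_id"] == san_id
                and entry["egw_ip"] == egw
                and entry["virtual_node"] == virtual_node):
            return entry
    return None
```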
In an embodiment, the virtual link and link attribute information corresponding to the virtual node is obtained by the computing power egress gateway selecting a service instance according to the local service routing table, where the local service routing table includes the virtual routing and forwarding identifier, the service instance IP, and the corresponding computing power metric information, and the virtual link and link attribute information is diffused by the Interior Gateway Protocol (IGP).
In this embodiment, the EGW may maintain the information related to a specific service identifier through the local service routing table, may select preferred service instances, and may decide whether to advertise or withdraw the computing power information upstream according to whether a preferred result exists.
In an embodiment, the global service route includes virtual private network information, the service identifier, the service color, the computing power egress gateway IP, the computing power egress gateway segment identifier, and the virtual private network segment identifier, where the global service route is formed by diffusing the computing power information from the computing power egress gateway to the computing power ingress gateway, and the bearer protocol of the global service route information is extended based on the BGP protocol family.
In this embodiment, the same class of service may be abstracted into one virtual node in the virtual private network. The information elements of the computing power route include, but are not limited to, the VPN information, the SAN-ID, the service color (COLOR), the computing power egress gateway IP (GW-IP, the home node IP), the computing power egress gateway segment identifier (GW-SID, the home node SID), and the segment identifier corresponding to the virtual private network in virtual routing and forwarding (VPN-SID, the SID with which the home node identifies the VRF). The bearer protocol of the computing power route information is implemented by extending the Border Gateway Protocol (BGP) family, such as the BGP multiprotocol extension (Multiprotocol BGP, MP-BGP).
In an embodiment, the computing power information does not include the information of the service instance or the preferred entry corresponding to the specific service identifier; whether the computing power information is advertised or withdrawn is determined by the computing power egress gateway according to whether the corresponding preferred entry exists locally; the computing power information uses the virtual routing and forwarding identifier and the service identifier as key indexes; and the computing power information is diffused by the computing power egress gateway to the computing power ingress gateway and processed uniformly by the computing power ingress gateway.
In this embodiment, a local service routing table (Service Routing Information Base, SRIB) may be constructed in the EGW. In the local SRIB, only one preferred entry exists per SAN-ID, so VPN-related computing power information can be formed for diffusion. In principle, the EGW may process the cloud-side perceived computing power information to obtain the preferred entry for the computing power instance. For example, after the EGW receives the computing power information, it takes the VRF-ID and SAN-ID as key indexes according to the VPN information under which the computing power resource is deployed, maintains the corresponding entries in the SRIB in combination with factors such as the load computation delay, available bandwidth, and cost of each service instance IP, and applies local preference to the entries according to the SLA requirements corresponding to the SAN-ID. For example, for a certain type of service whose SLA information indicates that delay is the concern, the EGW may locally prefer entries according to the delay of each service instance, and the resulting preferred entries may be issued as a service forwarding table (Service Forwarding Information Base, SFIB) to the forwarding plane to guide the forwarding of service request messages. By establishing a two-level service routing model in which the EGW can flexibly apply local preference to service instances, the pressure on both the forwarding plane and the control plane can be reduced.
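The local preference and withdrawal logic described above could look like the following sketch: entries are grouped by (VRF-ID, SAN-ID), one preferred entry is kept per group according to the SLA focus metric, and upstream computing power information is withdrawn only when no preferred entry remains. The entry structure and the "lower is better" comparison are assumptions for illustration.

```python
# Sketch of local preference in the EGW's service routing table (SRIB):
# one preferred service-instance entry per (VRF-ID, SAN-ID), chosen by the
# metric the SAN-ID's SLA says to optimise. Structures are assumptions.
def prefer_entries(srib_entries, sla_focus):
    """Group entries by (vrf_id, san_id) and keep the best (lowest) one per group."""
    preferred = {}
    for e in srib_entries:
        key = (e["vrf_id"], e["san_id"])
        best = preferred.get(key)
        if best is None or e[sla_focus] < best[sla_focus]:
            preferred[key] = e
    return preferred


def should_withdraw(preferred, vrf_id, san_id):
    """Advertised computing power info is withdrawn only when no entry remains."""
    return (vrf_id, san_id) not in preferred
```

Note that, consistent with the text above, a change in which instance is preferred does not by itself trigger a new advertisement; only the transition to zero preferred entries triggers a withdrawal.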
In an embodiment, the advertised computing power information need not be updated when the preferred result changes, which effectively reduces the pressure on the control plane. Furthermore, the advertised computing power information is withdrawn only when the number of preferred entries for a specific SAN-ID drops to zero.
In an embodiment, each SAN-ID is configured with a public network service virtual node IP (S-VIP for short), a service virtual node segment identifier (S-NODE-SID for short), and a service color, which are globally unique and mapped one-to-one. The EGW may install the preferred entry corresponding to a specific SAN-ID in the local SRIB, together with the public network service virtual node IP (S-VIP) corresponding to that SAN-ID, into the traffic engineering database (Traffic Engineering Database, TEDB), taking the computing delay, bandwidth, and cost corresponding to the virtual node and the preferred entry as the attributes of the link connecting the EGW and the S-VIP node, and flooding them within the IGP domain. That is, TEDB installation and IGP diffusion are performed on the basis of local preference, effectively reducing the scale of control plane information diffusion. In some embodiments, the EGW may skip local preference and perform TEDB installation and IGP diffusion directly based on the local SRIB. By abstracting the computing power information of network resources into computing power metrics and installing virtual nodes and links into IGP, the computing network integrated path computation is converted into a network problem. In addition, since an independent service routing layer is established, the EGW does not need to issue routing information containing service instance segments to the IGW.
In an embodiment, in a distributed computation scenario, according to the TE configuration of the IGW, the TE computation component of the IGW may, based on the local TEDB and the SLA requirements corresponding to the SAN-ID, calculate the path between the IGW and the virtual node S-VIP attached to the EGW, and generate the segment routing policy (SR-POLICY) information corresponding to the TE, whose key value (KEY) includes the COLOR and the S-VIP. A segment list (SEGMENT LIST) is formed from the SIDs of the nodes and links along the path, where the penultimate SID represents the EGW gateway selected for the SAN-ID (EGW-SID) and the last SID represents the virtual node.
In an embodiment, the IGW may receive the computed routes of different EGWs to form an SRIB covering the different EGWs, where the elements include, but are not limited to, VRF-ID, SAN-ID, EGW-IP, COLOR, EGW-SID, VPN-SID, etc., with VRF-ID and SAN-ID as the KEY. Under the same KEY, entries carrying different EGW related information (including EGW-IP/EGW-SID/VPN-SID) may be managed as multiple next hops. Based on the COLOR and S-VIP of each entry in the SRIB matching the key of an SR-POLICY, for example when the EGW-SID of an SRIB entry equals the penultimate SID in the SR-POLICY segment list, that SRIB entry may be set as preferred; that is, by selecting a next hop and an explicitly reachable path in the global service routing table, a global SFIB may be generated and sent to the forwarding plane to direct service request message forwarding.
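As an illustrative sketch only (the class and field names below are assumptions, not part of the embodiment), the preference step can be modeled as matching each SRIB next-hop candidate against the SR-POLICY that shares its (COLOR, S-VIP) key and comparing the EGW-SID with the penultimate SID of the segment list:

```python
# Hypothetical model of SRIB entry preference against SR-POLICY state.
from dataclasses import dataclass

@dataclass
class SribEntry:              # one next-hop candidate under (VRF-ID, SAN-ID)
    egw_ip: str
    egw_sid: str
    vpn_sid: str
    color: int
    s_vip: str

@dataclass
class SrPolicy:               # keyed by (COLOR, endpoint S-VIP)
    color: int
    endpoint: str
    segment_list: list        # [..., EGW-SID, S-NODE-SID]

def select_preferred(entries, policies):
    """Return (entry, policy) where the entry's EGW-SID equals the
    penultimate SID of the policy with the same (COLOR, S-VIP) key,
    i.e. a next hop with an explicitly reachable path for the SFIB."""
    by_key = {(p.color, p.endpoint): p for p in policies}
    for e in entries:
        p = by_key.get((e.color, e.s_vip))
        if p and len(p.segment_list) >= 2 and p.segment_list[-2] == e.egw_sid:
            return e, p
    return None
```

With two candidate EGWs under the same key, only the one named by the SR-POLICY segment list would be marked preferred.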
Fig. 3 is a schematic diagram of a service identifier-based operation procedure of a computing power network according to an embodiment. As shown in fig. 3, the computing power network operation process includes:
1) Service registration: the cloud side operation management system registers with the computing network brain according to the managed service type, where the registration elements include the service domain name, the service IP and port, and the SLA information corresponding to the service. The SLA information includes, but is not limited to, time delay, bandwidth (which may also be available bandwidth), reliability (such as packet loss), jitter, cost (COST, which may also be called overhead, price or expense), resource utilization (link occupation or cloud resource pool occupation), combinations thereof, and the like. The aim of this step is to enable the computing network brain to generate a service identifier (SAN-ID);
2) The computing network brain issues the SAN-ID and the corresponding SLA information to each computing power gateway according to the registered service, for conversion and mapping of the service quality requirements within the gateway;
3) After receiving a response message indicating that service registration succeeded, the cloud side may start the service sensing module and collect cloud side resource state information;
4) The service sensing module advertises the computing power information to the EGW (BGP may be adopted); the EGW diffuses the computing power information upstream (local preference may be implemented so that only preferred entries are diffused); and the IGW or EGW, acting as a BGP Link-State (BGP-LS) speaker, advertises the computing power information to the computing network brain;
5) Centralized path computation request: in scenarios where distributed head-node computation is unsuitable or unsupported, such as cross-domain scenarios or IGW resource limitation, the IGW may initiate a PCEP request so that the computing network brain centrally computes the traffic engineering path to the virtual node;
6) Issuing the computation result: for the centralized computation scenario, after the computing network brain completes the traffic engineering path computation, the result is issued to the IGW through a PCEP channel or a BGP SR-POLICY channel; for the distributed scenario, the IGW locally completes the computation and installation of the traffic engineering path.
7) The terminal initiates domain name resolution according to the original service domain name to obtain a service identifier, which is used for service initiation;
8) After obtaining the service identifier, the terminal protocol stack may select different encapsulation modes to send the service request message according to the service identifier type. For example, if an anycast (Anycast) IP is adopted to characterize the service, the service identifier may be placed directly in the destination IP; if a digitized service identifier is adopted, the service identifier usually needs to be carried in an IP extension header or another extension, and the destination IP of the service request message is the IGW IP. In either encapsulation mode, the purpose is that the service request message traverses the access network, is routed to the IGW, and has its service identifier parsed there, so as to realize service access;
9) Traffic scheduling: after the service request message reaches the IGW, the IGW parses the service identifier, looks up the computing power route, selects a tunnel policy, and encapsulates the tunnel header for hop-by-hop forwarding; after the service request message reaches the EGW, the EGW may look up the corresponding service instance according to the service identifier and forward the message to the service instance node to provide the service.
It should be noted that the embodiments of the present application are mainly described based on networks using the IPv6-based segment routing technology (Segment Routing over IPv6, SRv6), and in some embodiments are equally applicable to SR Multi-Protocol Label Switching (MPLS) based networks. For the computing power sensing component, although this scheme takes interfacing with the cloud side computing power sensing module as an example to simplify the explanation, in actual deployment the computing power gateway, or the computing power sensing component of each device in the computing power network, may instead be notified by the computing network brain interfacing with the cloud side operation management system, and preferred entries may be advertised after local preference is implemented so as to reduce control plane pressure and the information diffusion scale.
According to the computing network scheduling method, aiming at the problem that multiple factors affect scheduling algorithm efficiency, computing power related factors (experience, cost, resources and the like) are modeled in the same dimension as network factors, reducing the number of factors. In specific scenarios no joint computing-network calculation is needed: for example, in a scenario focused on cost, only the computing power service instance needs to be preferred and the network factors are relatively unimportant, and computing-power-dominant preference can be achieved by enlarging the range of the modeled computing power cost (so that its range exceeds the maximum sum of network costs), which gives the method strong adaptability. By abstracting the computing power service into a virtual node, abstracting the computing power instances into links connecting the EGW and the virtual node, taking the state of the preferred computing power instance as the virtual link attribute, and performing instance preference at the EGW, the number of links connected to the virtual node is greatly reduced; the virtual node and link information are directly injected into the TEDB and diffused using the IGP, avoiding a separate interaction mechanism between virtual nodes and normal nodes, so that joint computing-network computation is normalized into a conventional network TE problem whose computation mode is fully compatible with both the distributed path computation and the centralized controller path computation of traditional network TE.
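The cost-range trick described above (enlarging the modeled computing power cost range so that it exceeds the maximum network cost sum, making the computing power factor dominate a single additive metric) can be illustrated with assumed numbers:

```python
# Illustrative only: remap the computing power cost so that one
# compute-cost step outweighs any possible network path cost sum.
def dominant_compute_cost(compute_cost, network_cost_ceiling):
    """Scale so that a difference of 1 in compute cost exceeds the
    largest possible sum of network costs along any path."""
    return compute_cost * (network_cost_ceiling + 1)

# Candidate A: better compute cost (2) but the worst network path (95).
# Candidate B: worse compute cost (3) but a free network path (0).
ceiling = 100                 # assumed maximum network path cost sum
metric_a = dominant_compute_cost(2, ceiling) + 95
metric_b = dominant_compute_cost(3, ceiling) + 0
```

Candidate A still wins, so the preference is driven by the computing power factor regardless of the network term.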
By designing a computing power routing layer independent of the IP routing connection layer and a two-stage computing power routing model, VPN deployment is supported: the IGW can directly look up the computing power forwarding table according to the computing power identifier and encapsulate the tunnel to reach the EGW, and the EGW then selects the computing power instance, avoiding extending other traffic steering modes for computing power identifiers. The EGW senses whether the computing power service and its instances exist, decides whether to issue or withdraw routes, and issues the computing power route upstream with itself as the next hop, greatly reducing the control plane burden. In addition, the EGW does not need to issue IP routing information for reaching the computing power instances to the IGW, which reduces routing deployment configuration and enhances the security and scalability of the whole computing power routing domain connection. The IGW performs associated lookup using COLOR and the service virtual node IP based on the TE path computation result and the VPN route computation, and entries meeting the conditions are matched and preferred using the corresponding EGW-SID and the penultimate SID of the SEGMENT LIST in the SR-POLICY, so that the joint computing-network path computation problem is converted into a conventional routing domain problem, and VPN deployment with flexible computing power resources can be realized.
The computing network scheduling method has strong deployability, adaptability to different scenarios and scalability, and can accelerate rapid landing of the computing power network in the current network environment. Specifically: computing power information is transparent to intermediate network nodes, eliminating device upgrades and configuration adjustment and facilitating network deployment; the centralized and distributed integrated computing-network path computation is compatible with the centralized and distributed TE path computation of traditional networks, so the computing network brain and the network controller do not need additional computing power sensing channels, giving high deployability; computing power factors (experience, cost, resources and the like) and network factors are modeled in a normalized way, avoiding the lower efficiency and complexity caused by joint computing-network calculation, and the complex SLA requirements of the computing network are converted into various TE algorithm extensions; the computing power service identification scheme is completely decoupled from the control plane, so different identification schemes only affect identifier encapsulation and parsing at the computing power gateway and the control plane software system is highly reusable; the Metric can be continuously extended with constraint conditions in the future (in this case taking time delay as the target); and path computation is initiated by the head node according to TE configuration, with the option of adopting PCEP for centralized computation (e.g., cross-domain scenarios) or head node computation (single-domain fast convergence).
It should be noted that, in the case of distributed computation, the traffic engineering path from the IGW to the virtual node may be computed by the IGW, while in the case of centralized computation the computing network brain generates the corresponding SR-POLICY and issues it to the IGW; all subsequent computing power route preference and installation processes are completely decoupled from whether the centralized or the distributed step was used.
Fig. 4 is a flowchart of a computing network scheduling method according to an embodiment, which is applied to a cloud side node; technical details not described in detail in this embodiment may refer to any of the above embodiments. As shown in fig. 4, the method provided in this embodiment includes:
In step 210, a service registration request is sent to the network domain node according to the managed service type.
In step 220, the resource status of the service instance corresponding to the service identifier is collected when the service registration is successful.
In step 230, the computing power information is advertised to the computing power egress gateway according to the resource status information, and is diffused by the computing power egress gateway to the upstream gateway. Further, traffic may be directed to the service instance by a computing power ingress gateway and the computing power egress gateway.
Fig. 5 is a flowchart of a computing network scheduling method according to an embodiment, which is applied to a computing power ingress gateway; technical details not described in detail in this embodiment may refer to any of the above embodiments. As shown in fig. 5, the method provided in this embodiment includes:
In step 310, the computing power information advertised by the computing power egress gateway is obtained, together with the service identifier and the corresponding service level agreement (SLA) information of the accessed service issued by the network domain node.
In step 320, a computing power route corresponding to the specific service identifier is calculated according to the computing power metric, the computing power information and the SLA information.
In step 330, the service request packet is forwarded to the corresponding computing power egress gateway according to the computing power route.
In an embodiment, further comprising:
Maintaining a global service routing table with a plurality of next hops for the specific service identifier according to the received computing power information of a plurality of computing power egress gateways;
And constructing a traffic engineering model according to the SLA information corresponding to the specific service identifier and the corresponding virtual node, wherein the traffic engineering model comprises color information and a virtual node IP mapped by the service identifier.
In an embodiment, the computational power route comprises a global service route;
calculating the computing power route corresponding to the specific service identifier according to the computing power metric, the computing power information and the SLA information includes:
Determining a traffic engineering computation path from the computing power ingress gateway to the virtual node mapped by the service identifier according to the computing power metric, the computing power information and the SLA information;
Matching the traffic engineering computation path with the global service routing table to obtain the global service route;
wherein the penultimate node of the traffic engineering computation path corresponds to a computing power egress gateway, and the end node of the traffic engineering computation path corresponds to the virtual node mapped by the service identifier.
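A minimal sketch of this path determination, under assumptions: a plain shortest-delay Dijkstra with a bandwidth-pruning constraint, over an illustrative topology in which the final EGW→S-VIP virtual link carries the converted computing power metrics. Node names and metric values are hypothetical:

```python
# Hypothetical CSPF-style computation: shortest delay, with links below
# the bandwidth floor pruned. graph: {node: [(neighbor, delay, bandwidth)]}.
import heapq

def te_path(graph, src, dst, max_delay=None, min_bw=None):
    pq = [(0, src, [src])]
    best = {}
    while pq:
        d, n, path = heapq.heappop(pq)
        if n == dst:
            return (d, path) if (max_delay is None or d <= max_delay) else None
        if best.get(n, float("inf")) <= d:
            continue
        best[n] = d
        for nbr, delay, bw in graph.get(n, []):
            if min_bw is not None and bw < min_bw:
                continue          # constraint: prune low-bandwidth links
            heapq.heappush(pq, (d + delay, nbr, path + [nbr]))
    return None
```

On a toy topology IGW→P1→EGW→S-VIP, the returned path ends with the selected egress gateway (penultimate node) and the virtual node (end node), as described above.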
In an embodiment,
the computing power information does not include the service instance and preferred entry information corresponding to the specific service identifier;
whether the computing power information is advertised or withdrawn is decided by the computing power egress gateway according to whether a corresponding preferred entry exists locally;
the computing power information takes the virtual route forwarding identifier and the service identifier as key indexes;
and the computing power information is diffused from the computing power egress gateway to the directly connected computing power ingress gateway and is uniformly processed by the computing power ingress gateway.
In one embodiment, for the case of distributed computing, IGW may perform the process of computing traffic engineering computation paths and matching global service routing tables described in any of the embodiments above.
In an embodiment, the computing power information includes a preferred entry, where the preferred entry is determined by the computing power egress gateway according to the computing power information advertised by the cloud side, the SLA information corresponding to the service identifier, the virtual private network (VPN) information, and the load of the service IP, and takes the virtual route forwarding identifier and the service identifier as key indexes.
Fig. 6 is a flowchart of a computing network scheduling method according to an embodiment, which is applied to a computing power egress gateway; technical details not described in detail in this embodiment may refer to any of the above embodiments. As shown in fig. 6, the method provided in this embodiment includes:
In step 410, the resource status of the service instance is obtained;
In step 420, a computing power metric and a local service route are determined based on the resource status.
In step 430, the virtual link and link attribute information corresponding to the virtual node are installed, where the virtual link and link attribute information are obtained by the computing power egress gateway selecting a service instance for a specific service identifier according to the local service route.
In step 440, the computing power information advertised by the cloud side is diffused to the computing power ingress gateway.
In an embodiment, the computing power metric includes a network delay converted from the computation delay, a network cost converted from the resource cost, an interface bandwidth converted from the total amount of resources, and an interface bandwidth occupation converted from the resource occupation;
the method further comprises the steps of:
A local service routing table is generated, the local service routing table containing the virtual route forwarding (VRF) identifier, the service instance identifier, and the corresponding computing power metric information.
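Under the conversions listed above (computation delay → network delay, resource cost → network cost, resource totals → interface bandwidth), generating the local service routing table can be sketched as follows; field names and the resource-unit-to-bandwidth factor are assumptions for illustration:

```python
# Hypothetical local service routing table build on the egress gateway.
def build_local_srt(vrf_id, san_id, instances):
    """instances: [{'ip', 'compute_delay_ms', 'resource_cost',
    'total_units', 'used_units'}]. Converts computing power metrics
    into the same dimension as network metrics."""
    UNIT_BW_MBPS = 100  # assumed conversion: 1 resource unit -> 100 Mbps
    table = []
    for ins in instances:
        table.append({
            "vrf_id": vrf_id, "san_id": san_id, "ins_ip": ins["ip"],
            "delay": ins["compute_delay_ms"],    # computation delay -> network delay
            "cost": ins["resource_cost"],        # resource cost -> network cost
            "bandwidth": ins["total_units"] * UNIT_BW_MBPS,
            "used_bandwidth": ins["used_units"] * UNIT_BW_MBPS,
        })
    return table
```

Each row then carries network-dimension metrics only, so later preference and TE computation need no special computing-power semantics.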
In an embodiment, installing virtual links and link attribute information corresponding to virtual nodes includes:
According to the local service route, the IP corresponding to the service identifier is used as the virtual node, the computing power metric information of the corresponding entry is used as the link attribute of the virtual link, and IGP intra-domain flooding is performed to realize intra-domain diffusion of the virtual link and link attribute information.
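The installation step can be sketched as follows, assuming a delay-focused SLA so that the preferred entry is the lowest-delay instance; the resulting virtual link carries that entry's metrics as TE attributes (function and field names are illustrative):

```python
# Hypothetical virtual-link formation: one link EGW -> S-VIP per service
# identifier, with the preferred instance's metrics as link attributes.
def install_virtual_link(local_srt, s_vip, egw_id):
    """Pick the preferred instance (lowest delay, per an assumed
    delay-focused SLA) and expose it as a single virtual link whose
    TE attributes mirror that instance's converted metrics."""
    best = min(local_srt, key=lambda e: e["delay"])
    return {"from": egw_id, "to": s_vip,
            "te_delay": best["delay"], "te_cost": best["cost"],
            "te_bandwidth": best["bandwidth"]}
```

However many instances the EGW attaches, upstream nodes only ever see this single link, which is what keeps the diffusion scale small.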
In an embodiment, the computing power information does not include the service instance and preferred entry information corresponding to the service identifier;
whether the computing power information is advertised or withdrawn is decided by the computing power egress gateway according to whether a corresponding preferred entry exists locally;
the computing power information takes the virtual route forwarding identifier and the service identifier as key indexes;
and the computing power information is diffused from the computing power egress gateway to the computing power ingress gateway and is uniformly processed by the computing power ingress gateway.
In an embodiment, further comprising:
The local service routing table is looked up according to the service identifier in the service request message to determine the corresponding service instance, and the message is forwarded to that service instance.
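A minimal sketch of this lookup-and-forward behavior, with an assumed packet model (the dict fields are illustrative, not the embodiment's actual message format):

```python
# Hypothetical egress-gateway forwarding: resolve the service identifier
# to the preferred instance IP via the local forwarding table.
def egw_forward(packet, local_sft):
    """local_sft maps SAN-ID -> preferred service instance IP. Rewrites
    the destination to the instance IP; returns None if no instance."""
    san_id = packet.get("san_id")
    ins_ip = local_sft.get(san_id)
    if ins_ip is None:
        return None              # no preferred instance: drop the request
    fwd = dict(packet)           # leave the original packet untouched
    fwd["dst_ip"] = ins_ip
    return fwd
```

The terminal never learns the instance IP; only the egress gateway performs this final binding.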
Fig. 7 is a schematic implementation diagram of an integrated computing power sensing and computing network scheduling system according to an embodiment; the system may be used for joint computing-network path computation and computing power addressing with normalized computing and network factors in a computing power network. As shown in fig. 7, the related components included in the overall technical architecture are described as follows:
The cloud side computing power sensing module may be newly added in the cloud side monitoring system. On the basis of fully understanding computing power scheduling requirements, it may perform convergence modeling on the raw computing power monitoring indexes to form a small number of comprehensive indexes that stably reflect the computing power quality level, and serves as the speaker of computing power states; the related advertisement information is described under the X1 interface;
The computing power sensing component may be newly added on the network device (EGW). It interfaces with the cloud side computing power sensing module (X1 interface), may serve as the first consumer of computing power information, and interfaces with the computing power route management component (X2 interface). Damping and processing of the information are realized by configuring related policies, and the processed, qualified computing power information is injected into the computing power route management component;
The computing power route management component may be newly added on the network device. According to the received computing power state information, it: generates the local computing power routing table SRIB and performs local preference according to the SAN-ID related preference policies (experience, cost, resources and the like); injects the local computing power routing table into the BGP protocol component (X4 interface) for external release; receives the computing power routes released by the BGP component from remote BGP nodes to form the global computing power routing table; as an EGW, forms the abstract link and link parameters corresponding to each virtual computing power node according to the locally preferred service computing power routing table and injects them into the TEDB (X3 interface); and, as an IGW node, receives the SR-TE component output (interface X6), generates the final global computing power forwarding table by iterating the local VPN computing power routes, and issues it (X7 interface), the first four functions being the key points;
The message forwarding component extends the network device, receives and installs the local/global computing power forwarding table issued by the computing power route management component (X7 interface), and realizes traffic scheduling and forwarding of terminal computing power service requests;
The BGP protocol component extends the network device to implement local upload/download (X4 interface) and node-to-node (X5 interface) diffusion of VPN computing power routes;
The SR-TE computation component: the computing power route management component subscribes to the related computation result entries, and the external interface is not modified.
The key interfaces and illustrative examples contained in the overall technical architecture are as follows:
The X1 and X2 interfaces: X1 and X2 differ in the interface model only in advertisement frequency and in the bearer protocol and channel. The information types include three kinds: add, update and delete. The model content elements include, but are not limited to, service ID, service instance IP, time delay (cloud instance processing delay), cost (cloud instance usage cost), physical bandwidth (cloud instance available resources) and occupied bandwidth (cloud instance occupied resources);
The X3 interface: the virtual NODE elements include, but are not limited to, service ID, virtual NODE IP and virtual NODE SID; the virtual link attributes include, but are not limited to, time delay, cost, link bandwidth, link occupied bandwidth and the like. The link attributes characterize the state of the locally preferred computing power instance, and link attribute refresh is generated along with changes of the local computing power instance preference or state updates;
The X4 interface comprises an EGW node uplink and an IGW node downlink, and the elements of the interface comprise VRF-ID, SAN-ID, COLOR, EGW-IP and EGW-SID;
The X5 interface adopts MP-BGP expansion, and the elements of the interface include, but are not limited to, VPN information (RT/RD), SAN-ID, COLOR, EGW-IP, EGW-SID and VPN-SID;
The X6 interface: SR-POLICY issuing information, where the general key includes COLOR and S-VIP, and the attributes at least include SEGMENT LIST { SID1, SID2, ..., EGW-SID, S-NODE-SID }, the directly connected interface and the gateway IP;
The X7 interface: the local computing power routing table elements include, but are not limited to, VRF-ID, SAN-ID and INS-IP; the global computing power routing table elements include, but are not limited to, VRF-ID, SAN-ID and VPN-SID, with the outgoing interface (OIF) being SR-POLICY (COLOR, EGW-IP) with SEGMENT LIST { SID1, SID2, ... }.
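As an illustrative data model only (field names are assumptions drawn from the element lists above), the X5 route advertisement and the X6 SR-POLICY issue can be written as:

```python
# Hypothetical typed model of the X5 and X6 interface elements.
from dataclasses import dataclass
from typing import List

@dataclass
class X5RouteUpdate:
    rt: str                  # VPN information (RT)
    rd: str                  # VPN information (RD)
    san_id: str
    color: int
    egw_ip: str
    egw_sid: str
    vpn_sid: str

@dataclass
class X6SrPolicy:
    color: int               # key part 1
    s_vip: str               # key part 2 (virtual node IP)
    segment_list: List[str]  # { SID1, SID2, ..., EGW-SID, S-NODE-SID }
```

The invariant tying the two together is that the penultimate SID of an X6 segment list names the EGW carried in the matching X5 update.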
The workflow of traffic steering in a computing power network is briefly described as follows:
(1) The client initiates a computing power service request; the service request message carries the SID, and there are several carrying modes (using an ANYCAST address as the SID, carrying it in an IP extension header, and the like). The aim is that, whatever the carrying mode, the service request message can reach the IGW and the IGW can sense the SID.
(2) The IGW receives the service request message of the client, identifies the corresponding SID, selects the corresponding EGW, and steers the message onto the issued specific network path so as to meet the network quality requirement of the service access.
(3) The EGW receives the service request forwarded by the IGW, identifies the corresponding SID to select a proper service instance IP, simultaneously modifies the destination address of the service request message into the service instance IP, and forwards the service request message to the service instance by looking up an IP routing table to realize service connection;
(4) The service instance responds to the service request message; on the EGW, the source IP of the response message is modified back to the original destination IP of the service request message, and the subsequent interaction follows the normal service process of the network.
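Steps (3) and (4) amount to a stateful address rewrite at the EGW, which can be sketched as follows; the session-table model and field names are assumptions for illustration:

```python
# Hypothetical EGW rewrite: requests get their destination replaced by
# the chosen instance IP, replies get their source restored.
class EgwSessionRewriter:
    def __init__(self, san_to_instance):
        self.san_to_instance = san_to_instance
        self.sessions = {}   # (client_ip, instance_ip) -> original dst

    def request(self, pkt):
        """Step (3): bind the request to an instance, remember the
        original destination so the reply can be restored."""
        ins = self.san_to_instance[pkt["san_id"]]
        self.sessions[(pkt["src_ip"], ins)] = pkt["dst_ip"]
        return {**pkt, "dst_ip": ins}

    def reply(self, pkt):
        """Step (4): restore the service-facing source address."""
        orig = self.sessions[(pkt["dst_ip"], pkt["src_ip"])]
        return {**pkt, "src_ip": orig}
```

From the client's point of view the conversation stays with the service address; the instance IP never leaks out of the EGW.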
For the control plane, in order to realize the computing-aware traffic steering target, the conventional design of the network device control plane is: the IGP realizes network link attribute flooding, and IGP/BGP extensions realize computing power awareness and advertisement to upstream nodes, forming a Traffic Engineering Database (TEDB) and a Computing Awareness Database (CADB); the CADB and the TEDB then need cross-layer association, and joint computation (centralized or distributed) is performed according to the service access SLA requirements to obtain a service instance and a network path meeting the requirements. This brings two problems: 1) because of different computation speed requirements, centralized and distributed computation must both be supported, repeating the design of a PCEP-based SDN-like system architecture; 2) computing domain quality (health score, average processing delay, economic cost, resource occupation) and network domain quality (bandwidth, delay, jitter, packet loss) are not unified, computation time grows as constraints increase, and the large CPU resource consumption prevents large-scale deployment. Furthermore, computing instances providing the same service type are flexibly deployed to the same EGW and/or different EGWs; if the EGW computing resource states were continuously updated to the upstream IGW, massive computing state information updates would put great pressure on the control plane of the network device and could even cause system crashes.
Fig. 8 is a schematic diagram of an implementation of a control plane according to an embodiment. In the embodiment, the traffic steering SLA targets can be unified into five key measurement indexes: end-to-end access delay, cost, bandwidth, jitter and packet loss. According to different targets: 1) the experience class usually focuses on delay, jitter and packet loss; 2) the cost class usually focuses on cost/energy consumption and the like, that is, cost; 3) the resource class focuses on resource usage/state, for example on available bandwidth if the cloud residual resources are converted into available bandwidth. In actual service deployment, one preferred index is selected from the five measurement indexes according to the target, and the other indexes serve as constraint conditions. For example, as shown in fig. 8, taking distributed computation as an example, the control plane flow mainly includes:
S1, each service registers, updates or withdraws according to its service type and the service instances it attaches. In addition, the computing power information may be obtained from a computing resource pool or an operation system; the manner of obtaining the computing power information is not limited in this embodiment.
S2, the EGW updates the computing power load. Registered service instances may be characterized by IP. The acquired computing power information is converted into a dimension consistent with the network by recording computing power measurement indexes (computing metrics).
S3, the EGW may generate a local service routing table according to the computing power information obtained in S2, including the VRF identifier, the service instance IP and the computing Metric. On this basis, local preference may further be performed, for example by delay, and the preferred entry is diffused upstream for the head node or the computing network brain to compute the computing power route.
S4, the EGW forms a virtual node (representing which preferred services the current EGW attaches) and a link (the preferred link, whose Metric is the Metric of the preferred computing power resource) according to the preferred service instance. On this basis, the number of links can be greatly reduced and the network computation efficiency improved. It should be noted that without preference, the number of links would be determined by the number of service instances attached to the EGW. In this step, the computing power information of the computing power resource may be converted into the TE attributes of the link.
S5, based on the IGP, the TE link attributes and the virtual node information are flooded and diffused to the upstream nodes.
S6, after diffusion, the IGW has the global link information.
S7, the joint computing-network scheduling problem is converted into a network path problem: the IGW may compute a path from itself to the virtual node corresponding to the service identifier.
S8, EGW01 and EGW02 issue MP-BGP VPN service routes. S9 may be performed in synchronization with S5/S6.
S9, generating a global service route table to form a global service route with a plurality of EGW next hops. For example, if there are two preferred entries per EGW for each service, a total of four global service routes may be formed.
S10, the result of S7 (the EGW-IP of the penultimate hop) is jointly matched with the routes generated in S9 (whether the next hop is EGW01 or EGW02) to determine whether to forward to EGW01 or EGW02, obtaining the final forwarding table, namely the global service route.
On this basis, the service instance may be further determined by EGW01 or EGW 02.
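The class-based preference used in this flow (experience → delay, cost → cost, resource → available bandwidth, with the remaining indexes acting as constraints, per the description of fig. 8) can be sketched as follows; field names and bounds are illustrative assumptions:

```python
# Hypothetical class-based preference: one index is preferred per SLA
# class, the other indexes act as constraints.
SLA_CLASSES = {
    "experience": ("delay", min),     # jitter / packet loss as constraints
    "cost":       ("cost", min),
    "resource":   ("avail_bw", max),  # residual resources as available bandwidth
}

def prefer_instance(entries, sla_class, constraints=None):
    """Filter entries violating any constraint (upper bound for delay,
    cost, jitter, loss; lower bound for avail_bw), then pick the best
    entry by the class's preferred index."""
    key, pick = SLA_CLASSES[sla_class]
    ok = []
    for e in entries:
        violated = False
        for c, bound in (constraints or {}).items():
            violated = e[c] < bound if c == "avail_bw" else e[c] > bound
            if violated:
                break
        if not violated:
            ok.append(e)
    return pick(ok, key=lambda e: e[key]) if ok else None
```

The same selector works unchanged for local preference at the EGW (S3) and for any later tie-breaking among global routes, since all indexes are already in network dimensions.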
Fig. 9 is a schematic diagram of a local service routing table according to an embodiment. The EGW perceives the computing instance state from EDGEMANAGER, uses the VRF-ID and SID as indexes according to the VPN information of the computing resource deployment, combines the service instance IP with the delay, bandwidth and cost elements of the computing resource, and maintains the corresponding service instance entries in the SRT (refer to fig. 9). Local preference is implemented according to the SLA constraints corresponding to the SID (if the service SLA focuses on delay, local preference is by instance delay), forming the local SRT entry preference, and the preferred entries are issued as the SFT (refer to fig. 10) to the forwarding plane for computing power service request processing and forwarding.
When a preferred service instance exists for a specific SID in the local SRT, the EGW issues a route update message to the IGW; once the number of preferred service instances drops to zero due to resource state degradation, the EGW issues a route withdrawal message to the IGW. The bearer protocol is usually implemented by a BGP protocol family extension (MP-BGP), and the carried elements include the message type, SID, EGW-IP, VPN-SID, RT/RD and the like. A service route, rather than specific service instance information, is issued to the IGW, forming a service routing layer independent of the VPN IP routes and reducing control plane pressure.
Based on the preferred entries of the local SRT for each SID, the EGW installs in the IGP a virtual node and a virtual link corresponding to the SID. The virtual node is connected to the EGW through the virtual link, and the delay, bandwidth and cost of the preferred service instance serve as the link attributes of the virtual link, which are flooded within the IGP domain as network metric values, greatly reducing the scale of control plane information diffusion.
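A minimal sketch of this virtual-topology installation (hypothetical data shapes; the real installation emits IGP link-state advertisements rather than dictionaries):

```python
# Hypothetical sketch: install one virtual node per SID and one
# EGW -> virtual-node link whose attributes carry the preferred
# instance's metrics, so only one (node, link) pair per SID is
# flooded into the IGP instead of per-instance state.
def install_virtual_topology(egw, preferred_by_sid):
    """Build the virtual links to be flooded, one per SID."""
    links = []
    for sid, best in preferred_by_sid.items():
        links.append({
            "from": egw,
            "to": f"vnode-{sid}",
            "delay": best["delay"],
            "bandwidth": best["bandwidth"],
            "cost": best["cost"],
        })
    return links

links = install_virtual_topology("EGW01", {
    "SID1": {"delay": 7, "bandwidth": 80, "cost": 6},
    "SID2": {"delay": 15, "bandwidth": 200, "cost": 3},
})
```

Note the compression this buys: however many instances back a SID, the IGP only ever sees one virtual link per (EGW, SID) pair.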
Fig. 10 is a schematic diagram of a global service routing table according to an embodiment. As shown in fig. 10, steering traffic on the IGW toward a computing service SID translates into a traditional network traffic engineering problem: according to the SLA constraints corresponding to the service SID, path computation is performed between the IGW and the virtual node that corresponds to the SID and is connected to the EGW, generating a corresponding SR-POLICY-1 path (COLOR, Endpoint: SID). The penultimate segment ID (a node segment) in the segment list expresses the EGW preferred for service access under the current conditions, and SR-POLICY-1 is translated into an SR-POLICY-2 path (COLOR, Endpoint: EGW-IP) that meets the requirement.
The IGW receives the VPN service route information issued by each EGW and forms a global SRT toward multiple EGWs, i.e., with the VRF-ID and SID as the key and the different EGW-IPs as multiple next hops. SR-POLICY-2 is matched against each COLOR and EGW-IP in the SRT to obtain the preferred global SRT entries, from which a global SFT (refer to fig. 7) is generated and issued to the IGW forwarding plane for steering service requests.
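The aggregation and matching on the IGW can be sketched as follows (hypothetical field names; the real table is keyed by VRF-ID and SID with MP-BGP-learned next hops):

```python
# Hypothetical sketch: build the global SRT from per-EGW service route
# advertisements, then select the next hop whose EGW-IP matches the
# SR-POLICY-2 endpoint computed for the service COLOR.
def build_global_srt(advertisements):
    """Aggregate advertisements into {(vrf_id, sid): [egw_ip, ...]}."""
    srt = {}
    for adv in advertisements:
        srt.setdefault((adv["vrf_id"], adv["sid"]), []).append(adv["egw_ip"])
    return srt

def select_next_hop(srt, vrf_id, sid, policy_endpoint):
    """Return the SRT next hop matching the SR-POLICY-2 endpoint."""
    hops = srt.get((vrf_id, sid), [])
    return policy_endpoint if policy_endpoint in hops else None

srt = build_global_srt([
    {"vrf_id": "vidx", "sid": "SID1", "egw_ip": "EGW01"},
    {"vrf_id": "vidx", "sid": "SID1", "egw_ip": "EGW02"},
])
chosen = select_next_hop(srt, "vidx", "SID1", "EGW02")
```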
Fig. 11 is a schematic diagram of implementation of a forwarding plane according to an embodiment. As shown in fig. 11, an exemplary forwarding plane main flow is as follows:
S1, a terminal initiates a computing power service request. The SAN-ID may be carried in various ways (an anycast address used as the SAN-ID, an extension header, and the like); the embodiment of the application does not limit the carrying mode, the aim being that the request message can reach the IGW and the IGW can perceive the SAN-ID. Here, the UE initiates a computing power service request with SAN-ID = SID1;
S2, the IGW receives the service request message of S1 and searches the VPN computing power SFIB according to the SAN-ID carried by the message, namely SID1, obtaining {vidx, SID1, vidx-EGW01-SID, py-SID1-EGW01};
S3, the IGW encapsulates the outer tunnel header and SRH (containing the VPN SID: vidx-EGW01-SID) according to the SFIB contents, with the inner message encapsulation unchanged. The message is sent to the output interface and finally forwarded to EGW01 through the intermediate P1 node;
S4, EGW01 receives the tunnel-encapsulated service request packet sent by the IGW, removes the tunnel encapsulation, and uses SID1 (according to vidx-EGW01-SID and the carried SAN-ID characteristics) to search the VPN SFIB, obtaining instance IP1; meanwhile, the DA in the packet is converted into IP1, forming a NAT entry;
S5, for the decapsulated message, EGW01 looks up the local IP routing table and forwards the message to the service instance node;
S6, the service instance node responds according to the service request message, the source IP of the response being IP1 and the destination IP being UE_IP;
S7, EGW01 receives the response message, searches the NAT table formed in S4 according to the message source SRC = IP1, modifies the message source IP to SID1, and searches the corresponding VPN routing table according to the message destination IP. The VPN routing table is derived from the routes issued by the IGW to the EGW; it is usually related to network planning and deployment, and a specific SR-TE path may be planned for the return message, which is not limited by the embodiment of the application;
S8, EGW01 encapsulates the outer tunnel header and SRH (containing the VPN SID) according to the lookup result, with the inner message encapsulation unchanged. The message is sent to the output interface and finally forwarded to the IGW through the intermediate node;
S9, the IGW processes the message received in S8 as a normal message, which is the standard processing flow of an L3VPN PE node and is not described in detail herein;
And S10, the IGW searches the local VPN routing table according to the service message decapsulated in S9 and finally sends the service message to the UE, completing the round trip of the UE service request and the service instance response. Subsequent messages initiated by the UE repeat the S1-S10 flow.
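The request direction of this flow (S1-S4) can be sketched with toy lookup tables (all identifiers such as `vidx-EGW01-SID`, `py-SID1-EGW01` and `IP1` are the example values from the flow above; the table structures are illustrative assumptions):

```python
# Hypothetical sketch of S1-S4: the IGW maps the SAN-ID carried in the
# request to an SFIB entry and adds the tunnel/SRH encapsulation; the
# EGW strips it, resolves (VPN SID, SAN-ID) to an instance IP, rewrites
# the destination, and records a NAT entry for the return direction.
sfib = {"SID1": {"vpn_sid": "vidx-EGW01-SID", "policy": "py-SID1-EGW01"}}
vpn_sfib = {("vidx-EGW01-SID", "SID1"): "IP1"}
nat_table = {}

def igw_forward(packet):
    """S2-S3: SFIB lookup by SAN-ID, then outer tunnel/SRH encapsulation."""
    entry = sfib[packet["san_id"]]
    return {"srh": [entry["vpn_sid"]], "inner": dict(packet)}

def egw_receive(tunneled):
    """S4: decapsulate, resolve the instance IP, rewrite DA, record NAT."""
    inner = tunneled["inner"]
    instance_ip = vpn_sfib[(tunneled["srh"][0], inner["san_id"])]
    nat_table[instance_ip] = inner["san_id"]  # used in S7 to restore the SID
    inner["dst"] = instance_ip
    return inner

request = {"src": "UE_IP", "dst": "SID1", "san_id": "SID1"}
delivered = egw_receive(igw_forward(request))
```

The NAT entry recorded here is exactly what S7 consults in the reverse direction to rewrite the response source IP back from IP1 to SID1.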
The embodiment of the application also provides a device for dispatching the calculation network. Fig. 12 is a schematic structural diagram of a computing network scheduling apparatus according to an embodiment. As shown in fig. 12, the computing network scheduling apparatus includes:
a service registration module 510 configured to generate a service identifier according to the service registration request;
An index determining module 520 configured to determine a calculation metric according to the resource status of the service instance corresponding to the service identifier;
a transmitting module 530 configured to transmit the service identifier and the corresponding service level agreement SLA information of user access to the computing power gateway;
a calculation module 540 configured to calculate a calculation route according to the calculation metric, the calculation information advertised by the calculation gateway, and the SLA information.
The computing network scheduling device of this embodiment distinguishes different types of services through the service identifier, and the SLA information corresponding to each service can be flexibly set, so that the integrated multi-factor scheduling requirements of the computing network can be met, covering the scheduling targets of experience-class, cost-class and resource-class scenarios. By integrating computing power perception with computing network scheduling, device resource consumption and investment can be reduced, and the pressure on the control plane and forwarding plane caused by the scale expansion of computing power information is relieved, giving the device good deployability and expansibility. By uniformly issuing the SAN-ID and the mapped SLA information to the computing power gateway, a foundation is provided for realizing computing-network joint TE and computing power route preference.
In one embodiment, the service registration request includes a service domain name, a service IP and port, SLA information, and resource information.
In one embodiment, the metrics of the SLA information include delay, cost, bandwidth, jitter and reliability;
the computing power metrics include a network delay converted from the computing delay, a network cost converted from the resource cost, an interface bandwidth converted from the total amount of resources, and an interface bandwidth occupancy converted from the resource occupancy.
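The metric conversion described above can be sketched as a simple mapping (hypothetical field names; the concrete units and scaling are deployment-specific):

```python
# Hypothetical sketch: map compute-side measurements into the network
# metric space so that ordinary TE path computation can consume them.
def to_network_metrics(compute):
    """Convert compute resource state into network-style link metrics."""
    return {
        "delay": compute["processing_delay_ms"],    # computing delay -> network delay
        "cost": compute["resource_cost"],           # resource cost -> network cost
        "bandwidth": compute["total_resources"],    # resource total -> interface bandwidth
        "occupancy": compute["occupied_resources"] / compute["total_resources"],
    }

metrics = to_network_metrics({
    "processing_delay_ms": 8,
    "resource_cost": 5,
    "total_resources": 100,
    "occupied_resources": 40,
})
```

Once expressed this way, the compute metrics can ride on the virtual-link attributes described earlier and be optimized by unmodified TE machinery.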
In one embodiment, the calculation module 540 includes:
A path determining unit configured to determine a traffic engineering calculation path from the computing force entry gateway to the virtual node mapped by the service identifier according to the computing force index, the computing force information advertised by the computing force gateway, and the SLA information;
a matching unit configured to match the traffic engineering calculation path with the global service routing table to obtain a global service route;
an instance determining unit, configured to determine a service instance corresponding to the specific service identifier according to a resource state of the service instance, so as to obtain the local service route;
wherein the penultimate node of the traffic engineering computation path corresponds to the computing power egress gateway IP, and the end node of the traffic engineering computation path corresponds to the virtual node mapped by the service identifier.
In an embodiment, the virtual links and link attribute information corresponding to the virtual nodes are obtained by selecting the service instance by the computing power outlet gateway according to a local service route table, wherein the local service route table comprises a virtual route forwarding identifier, a service instance IP and corresponding computing power quantity information;
the virtual link and link attribute information are disseminated by an interior gateway protocol IGP protocol.
In one embodiment, the global service route comprises virtual private network information, service identification, service color, computing power exit gateway IP, computing power exit gateway segment identification, virtual private network segment identification;
the global service route is formed by diffusing the computing power information from the computing power outlet gateway to the computing power inlet gateway;
the bearer protocol for the global service routing information is based on BGP protocol family extensions.
In an embodiment, the computing power information does not include information of a service instance and a preference entry corresponding to the specific service identifier;
the computing power information is used for deciding whether to announce and withdraw by the computing power outlet gateway according to whether a corresponding preferred item exists locally;
the calculation force information takes a virtual route forwarding identifier and a service identifier as key indexes;
And the computing power information is diffused from the computing power outlet gateway to the computing power inlet gateway and is uniformly processed by the computing power inlet gateway.
The network computing scheduling device provided in this embodiment and the network computing scheduling method provided in the foregoing embodiments belong to the same inventive concept, and technical details not described in detail in this embodiment can be seen in any of the foregoing embodiments, and this embodiment has the same advantages as those of executing the network computing scheduling method.
The embodiment of the application also provides a device for dispatching the calculation network. Fig. 13 is a schematic structural diagram of a computing network scheduling apparatus according to an embodiment. As shown in fig. 13, the computing network scheduling apparatus includes:
a request module 610 configured to send a service registration request to a network domain node according to the service type of the managed service;
The acquisition module 620 is configured to acquire a resource state of a service instance corresponding to the service identifier under the condition that service registration is successful;
and an advertising module 630 configured to advertise, according to the resource status, the computing power information to the computing power egress gateway, the computing power information being diffused by the computing power egress gateway to the upstream gateway, and the computing power ingress gateway and the computing power egress gateway directing traffic to the service instance.
The network computing scheduling device provided in this embodiment and the network computing scheduling method provided in the foregoing embodiments belong to the same inventive concept, and technical details not described in detail in this embodiment can be seen in any of the foregoing embodiments, and this embodiment has the same advantages as those of executing the network computing scheduling method.
The embodiment of the application also provides a device for dispatching the calculation network. Fig. 14 is a schematic structural diagram of a computing network scheduling apparatus according to an embodiment. As shown in fig. 14, the computing network scheduling apparatus includes:
the acquiring module 710 is configured to acquire the computing power information advertised by the computing power egress gateway, and the service identifier and corresponding service level agreement SLA information of user access issued by the network domain node;
A route calculation module 720 configured to calculate a computing power route corresponding to a specific service identifier according to the computing power metric, the computing power information and the SLA information;
And a forwarding module 730, configured to forward the service request packet to a corresponding computing power egress gateway according to the computing power route.
In an embodiment, further comprising:
the maintenance module is used for maintaining a global service routing table with a plurality of next hops of the specific service identification according to the received computing power information of the plurality of computing power outlet gateways;
The construction module is configured to construct a traffic engineering model according to the SLA information corresponding to the specific service identifier and the corresponding virtual node, wherein the traffic engineering model comprises color information and a virtual node IP mapped by the service identifier.
In one embodiment, the power routes include global service routes;
the route calculation module 720 includes:
A path determining unit configured to determine a traffic engineering calculation path from the computing power entry gateway to the virtual node mapped by the service identifier, according to the computing power metric, the computing power information, and the SLA information;
the matching unit is used for matching the traffic engineering calculation path with a global service routing table to obtain the global service routing;
wherein the penultimate node of the traffic engineering computation path corresponds to a computing force egress gateway, and the end node of the traffic engineering computation path corresponds to a virtual node mapped by a service identifier.
In an embodiment, the computing power information does not include service instance and preference entry information corresponding to the specific service identifier;
the computing power information is used for deciding whether to announce and withdraw by the computing power outlet gateway according to whether a corresponding preferred item exists locally;
the calculation force information takes a virtual route forwarding identifier and a service identifier as key indexes;
And the computing power information is diffused from the computing power outlet gateway to the computing power inlet gateway and is uniformly processed by the computing power inlet gateway.
The network computing scheduling device provided in this embodiment and the network computing scheduling method provided in the foregoing embodiments belong to the same inventive concept, and technical details not described in detail in this embodiment can be seen in any of the foregoing embodiments, and this embodiment has the same advantages as those of executing the network computing scheduling method.
The embodiment of the application also provides a device for dispatching the calculation network. Fig. 15 is a schematic structural diagram of a computing network scheduling apparatus according to an embodiment. As shown in fig. 15, the computing network scheduling apparatus includes:
A state acquisition module 810 configured to acquire a resource state of a service instance;
a determining module 820 arranged to determine a calculation metric and a local service route from the resource status;
The installation module 830 is configured to install the virtual link and link attribute information corresponding to the virtual node, where the virtual link and link attribute information are obtained by the computing power egress gateway selecting a service instance for a specific service identifier according to the local service routing table;
the diffusing module 840 is configured to diffuse the computing power information advertised by the cloud side to the computing power entry gateway.
In an embodiment, the calculation quantity index includes a network delay obtained by converting a calculation delay, a network cost obtained by converting a resource cost, an interface bandwidth obtained by converting a total amount of resources, and an interface bandwidth occupation obtained by converting a resource occupation;
The device also comprises a local generation module which is used for generating a local service routing table, wherein the local service routing table comprises a virtual route forwarding VRF identifier, a service instance identifier and corresponding calculation quantity information.
In one embodiment, the installation module is configured to, according to the local service route, take the IP corresponding to the service identifier as a virtual node and take the computing power metric information of the corresponding entry as the link attribute of the virtual link, so as to implement intra-domain flooding of the virtual link and the link attribute information by the IGP.
In an embodiment, the computing power information does not include information specifying a service instance and a preference entry corresponding to the service identification;
The computing power information is determined whether to announce and withdraw by the computing power outlet gateway according to whether a corresponding preferred item exists locally or not;
the calculation force information takes a virtual route forwarding identifier and a service identifier as key indexes;
and the computing power information is diffused from the computing power outlet gateway to the computing power inlet gateway and is uniformly processed by the computing power inlet gateway.
In an embodiment, the device further comprises a forwarding module configured to search a local service routing table according to the service identifier in the service request message, so as to determine a corresponding service instance, and forward the message to the corresponding service instance.
The network computing scheduling device provided in this embodiment and the network computing scheduling method provided in the foregoing embodiments belong to the same inventive concept, and technical details not described in detail in this embodiment can be seen in any of the foregoing embodiments, and this embodiment has the same advantages as those of executing the network computing scheduling method.
The embodiment of the application also provides a computing network brain. Fig. 16 is a schematic diagram of a hardware structure of the computing network brain provided by the embodiment. As shown in fig. 16, the computing network brain provided by the application comprises a processor 11 and a memory 12, where the number of processors 11 in the computing network brain may be one or more (one processor 11 is taken as an example in fig. 16), and the memory 12 is configured to store one or more programs, the one or more programs being executed by the one or more processors 11, so that the one or more processors 11 implement the computing network scheduling method according to the embodiment of the application.
The computing network brain further comprises a communication device 13, an input device 14 and an output device 15.
The processor 11, the memory 12, the communication device 13, the input device 14 and the output device 15 in the computing network brain may be connected by a bus or other means, with a bus connection taken as an example in fig. 16.
The input device 14 may be used to receive input numeric or character information and to generate key signal inputs related to user settings and function control of the computing network brain. The output device 15 may include a display device such as a display screen.
The communication device 13 may include a receiver and a transmitter. The communication means 13 is arranged to perform information transceiving communication according to control of the processor 11.
The memory 12 is configured as a computer readable storage medium, and may be configured to store software programs, computer executable programs and modules, such as the program instructions/modules corresponding to the computing network scheduling method according to the embodiment of the present application (for example, the service registration module 510, the index determining module 520, the transmitting module 530 and the calculation module 540 in the computing network scheduling device). The memory 12 may include a storage program area and a storage data area, where the storage program area may store an operating system and the applications required for at least one function, and the storage data area may store data created according to the use of the computing network brain, etc. In addition, the memory 12 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device. In some examples, the memory 12 may further include memory located remotely from the processor 11, which may be connected to the computing network brain through a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The embodiment of the application also provides a cloud side node, and fig. 17 is a schematic hardware structure diagram of the cloud side node provided by the embodiment, as shown in fig. 17, the cloud side node provided by the application comprises a processor 21 and a memory 22, wherein the number of the processors 21 in the cloud side node can be one or more, in fig. 17, one processor 21 is taken as an example, the memory 22 is configured to store one or more programs, and the one or more programs are executed by the one or more processors 21, so that the one or more processors 21 implement the network computing scheduling method according to the embodiment of the application.
The cloud-side node further comprises communication means 23, input means 24 and output means 25.
The processor 21, the memory 22, the communication means 23, the input means 24 and the output means 25 in the cloud-side node may be connected by a bus or other means, in fig. 17 by way of example.
The input device 24 may be used to receive entered numeric or character information and to generate key signal inputs related to user settings and function control of the cloud-side node. The output means 25 may comprise a display device such as a display screen.
The communication device 23 may include a receiver and a transmitter. The communication device 23 is configured to perform information transmission and reception communication according to control of the processor 21.
The memory 22 is configured as a computer readable storage medium, and may be configured to store software programs, computer executable programs and modules corresponding to the computing network scheduling method according to the embodiment of the present application (for example, the request module 610, the acquisition module 620 and the advertising module 630 in the computing network scheduling apparatus). The memory 22 may include a storage program area and a storage data area, where the storage program area may store an operating system and the application programs required for at least one function, and the storage data area may store data created according to the use of the cloud-side node, etc. In addition, the memory 22 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device. In some examples, the memory 22 may further comprise memory remotely located with respect to the processor 21, which may be connected to the cloud-side node via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The embodiment of the application also provides a computing force portal gateway, and fig. 18 is a schematic hardware structure of the computing force portal gateway provided by the embodiment, as shown in fig. 18, and the computing force portal gateway provided by the application comprises a processor 31 and a memory 32, wherein the processor 31 in the computing force portal gateway can be one or more, in fig. 18, one processor 31 is taken as an example, the memory 32 is configured to store one or more programs, and the one or more programs are executed by the one or more processors 31, so that the one or more processors 31 implement the computing network scheduling method according to the embodiment of the application.
The computing portal gateway further comprises communication means 33, input means 34 and output means 35.
The processor 31, memory 32, communication means 33, input means 34 and output means 35 in the computing portal gateway may be connected by bus or other means, in fig. 18 by way of example.
The input device 34 may be used to receive entered numeric or character information and to generate key signal inputs related to user settings and function control of the computing force entry gateway. The output means 35 may comprise a display device such as a display screen.
The communication device 33 may include a receiver and a transmitter. The communication means 33 is arranged to perform information transceiving communication according to control of the processor 31.
The memory 32, which is a computer readable storage medium, may be configured to store software programs, computer executable programs and modules corresponding to the computing network scheduling method according to the embodiment of the present application (for example, the acquiring module 710, the route calculation module 720 and the forwarding module 730 in the computing network scheduling device). The memory 32 may include a storage program area and a storage data area, where the storage program area may store an operating system and the applications required for at least one function, and the storage data area may store data created according to the use of the computing power entry gateway, etc. In addition, the memory 32 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device. In some examples, the memory 32 may further include memory remotely located with respect to the processor 31, which may be connected to the computing power entry gateway through a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The embodiment of the application also provides a computing power outlet gateway, and fig. 19 is a schematic hardware structure of the computing power outlet gateway provided by the embodiment, as shown in fig. 19, and the computing power outlet gateway provided by the application comprises a processor 41 and a memory 42, wherein the processor 41 in the computing power outlet gateway can be one or more, in fig. 19, one processor 41 is taken as an example, the memory 42 is configured to store one or more programs, and the one or more programs are executed by the one or more processors 41, so that the one or more processors 41 implement the computing network scheduling method according to the embodiment of the application.
The computing force outlet gateway further comprises communication means 43, input means 44 and output means 45.
The processor 41, memory 42, communication means 43, input means 44 and output means 45 in the computing force outlet gateway may be connected by a bus or other means, in fig. 19 by way of example.
The input device 44 may be used to receive entered numeric or character information and to generate key signal inputs related to user settings and function control of the computing force outlet gateway. The output means 45 may comprise a display device such as a display screen.
The communication device 43 may include a receiver and a transmitter. The communication device 43 is provided to perform information transmission and reception communication according to the control of the processor 41.
The memory 42, which is a computer readable storage medium, may be configured to store software programs, computer executable programs and modules corresponding to the computing network scheduling method according to the embodiment of the present application (for example, the state acquisition module 810, the determining module 820, the installation module 830 and the diffusing module 840 in the computing network scheduling device). The memory 42 may include a storage program area and a storage data area, where the storage program area may store an operating system and the applications required for at least one function, and the storage data area may store data created according to the use of the computing power egress gateway, etc. In addition, the memory 42 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device. In some examples, the memory 42 may further include memory remotely located with respect to the processor 41, which may be connected to the computing power egress gateway through a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
Fig. 20 is a schematic structural diagram of a computing network scheduling system according to an embodiment. The embodiment of the application also provides a computing network dispatching system, which comprises a network domain node 910 and the cloud side node 920 according to any embodiment, wherein the network domain node 910 comprises a computing network brain 911, a packet network device 912, a computing power entry gateway 913 according to any embodiment and a computing power exit gateway 914 according to any embodiment.
The computing network brain 911 generates a service identifier according to the service registration request of the cloud side node 920 and issues the service identifier and the corresponding service level agreement SLA information to the computing power gateway. In a centralized computation scenario, it may also calculate a traffic engineering calculation path according to the computing power metric, the computing power information advertised by the computing power gateway and the SLA information, match the traffic engineering calculation path with the global service routing table to obtain a global service route, and issue the global service route to the computing power entry gateway 913. The cloud side node 920 sends a service registration request to the network domain node 910 according to the service type it manages; if the service registration succeeds, it collects cloud-side resource status information and advertises the computing power information to the computing power exit gateway 914 according to the resource status information. The computing power information is diffused upstream by the computing power exit gateway 914 through the packet network device 912, and the computing power entry gateway 913 and the computing power exit gateway 914 direct traffic to the service instance.
The computing power entry gateway 913 acquires the computing power information advertised by the computing power exit gateway, and the service identifier and corresponding service level agreement SLA information issued by the computing network brain 911. In a distributed computation scenario, the computing power entry gateway 913 may locally calculate the traffic engineering calculation path according to the computing power metric, the computing power information advertised by the computing power gateway and the SLA information, match the traffic engineering calculation path with the global service routing table to obtain the global service route, and complete the issuing. The service request message is forwarded to the corresponding computing power exit gateway 914 according to the global service route.
The computing power egress gateway 914 obtains the resource state of the service instance, determines the computing power metric and the local service route according to that resource state, and installs the virtual link and link attribute information corresponding to the virtual node, where the virtual link and link attribute information are obtained by the computing power egress gateway selecting a service instance for the specific service identifier according to the local service routing table. It also diffuses the computing power information advertised by the cloud side to the computing power ingress gateway 913.
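Claims 3 and 14 spell out the conversions behind the computing power metric mentioned here: computing delay becomes network delay, resource cost becomes network cost, the total amount of resources becomes interface bandwidth, and resource occupation becomes interface bandwidth occupation. A minimal sketch of such a conversion follows; the field names and the `bw_per_unit` scaling factor are illustrative assumptions, since the patent does not give concrete formulas.

```python
def to_network_metrics(resource_state, bw_per_unit=100):
    """Hypothetical conversion of cloud-side resource state into the four
    network-style metrics listed in claims 3 and 14. bw_per_unit (Mbps of
    advertised interface bandwidth per resource unit) is an assumed
    scaling factor, not taken from the patent."""
    total = resource_state["total_units"]
    used = resource_state["used_units"]
    return {
        "network_delay_ms": resource_state["compute_delay_ms"],  # computing delay -> network delay
        "network_cost": resource_state["resource_cost"],         # resource cost -> network cost
        "interface_bw_mbps": total * bw_per_unit,                # total resources -> interface bandwidth
        "bw_occupied_mbps": used * bw_per_unit,                  # occupation -> bandwidth occupation
    }

metrics = to_network_metrics(
    {"compute_delay_ms": 4, "resource_cost": 7, "total_units": 16, "used_units": 6})
```

Expressing compute-side state in network terms lets ordinary TE path computation weigh it alongside link delay and bandwidth, which is the point of the conversion.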
In this embodiment, the computing network brain 911 may perform centralized computation, or the computing power ingress gateway 913 may perform distributed head-node computation. The service identifier distinguishes different types of services, and the SLA information corresponding to each service can be set flexibly, so the scheme can satisfy the scheduling requirements of an integrated, multi-factor computing network and meet the scheduling targets of experience-oriented, cost-oriented and resource-oriented scenarios. Integrating computing power awareness with computing network scheduling reduces equipment resource consumption and investment, and relieves the pressure that large-scale diffusion of computing power information places on the control plane and forwarding plane, giving the system good deployability and scalability. Uniformly issuing the SAN-ID and the mapped SLA information to the computing power gateways lays the foundation for joint computing-network traffic engineering (TE) and computing power route preference.
The embodiment of the application also provides a storage medium storing a computer program which, when executed by a processor, implements the computing network scheduling method of any one of the embodiments of the application. For example, the method includes: generating a service identifier according to a service registration request; determining a computing power metric according to the resource state of the service instance corresponding to the service identifier; issuing the service identifier and the service level agreement SLA information of the corresponding user access to a computing power gateway; and computing a computing power route according to the computing power metric, the computing power information advertised by the computing power gateway, and the SLA information.
Alternatively, the method includes: sending a service registration request to a network domain node according to the type of the managed service; collecting, if the service registration succeeds, the resource state of the service instance corresponding to the service identifier; and notifying the computing power egress gateway of the computing power information according to that resource state, the computing power information being diffused to an upstream gateway by the computing power egress gateway.
Alternatively, the method includes: acquiring the computing power information advertised by the computing power egress gateway, together with the service identifier issued by the network domain node and the service level agreement SLA information of the corresponding user access; computing the computing power route corresponding to a specific service identifier according to the computing power metric, the computing power information and the SLA information; and forwarding the service request message to the corresponding computing power egress gateway according to the computing power route.
Alternatively, the method includes: obtaining the resource state of a service instance; determining a computing power metric and a local service route according to that resource state; installing the virtual link and link attribute information corresponding to the virtual node, where the virtual link and link attribute information are obtained by the computing power egress gateway selecting a service instance for a specific service identifier according to the local service routing table; and diffusing the computing power information advertised by the cloud side to the computing power ingress gateway.
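The egress-gateway steps above can be sketched as a small local service routing table, keyed by the virtual routing forwarding (VRF) identifier and the service identifier as the claims describe. The class, the per-entry fields and the lowest-delay selection rule are illustrative assumptions; the patent does not define how a preferred instance is chosen.

```python
class LocalServiceRoutingTable:
    """Sketch of the egress gateway's local service routing table:
    (VRF id, service id) maps to candidate service instances, each
    carried with its computing power metric (here a single delay value
    for simplicity). Names and fields are hypothetical."""

    def __init__(self):
        self.entries = {}

    def install(self, vrf, sid, instance_ip, delay_ms):
        """Record a service instance and its metric under (VRF, service id)."""
        self.entries.setdefault((vrf, sid), []).append((instance_ip, delay_ms))

    def select_instance(self, vrf, sid):
        """Illustrative preference rule: lowest converted network delay wins."""
        candidates = self.entries.get((vrf, sid), [])
        return min(candidates, key=lambda c: c[1])[0] if candidates else None

lsrt = LocalServiceRoutingTable()
lsrt.install("VRF-A", "SID100", "10.0.0.1", 12)
lsrt.install("VRF-A", "SID100", "10.0.0.2", 5)
chosen = lsrt.select_instance("VRF-A", "SID100")
```

In the scheme described above, the selected entry would also supply the virtual link and link attribute information that the gateway installs and floods via IGP.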
The computer storage media of the embodiments of the application may take the form of any combination of one or more computer-readable media. A computer-readable medium may be a computer-readable signal medium or a computer-readable storage medium. The computer-readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the computer-readable storage medium include: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM), a flash memory, an optical fiber, a portable CD-ROM, an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. A computer-readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device.
The computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, either in baseband or as part of a carrier wave. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, radio Frequency (RF), etc., or any suitable combination of the foregoing.
Computer program code for carrying out operations of the present application may be written in one or more programming languages or combinations thereof, including object-oriented programming languages such as Java, Smalltalk and C++, as well as conventional procedural programming languages such as the "C" programming language or similar languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the remote computer case, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
The foregoing description is only exemplary embodiments of the application and is not intended to limit the scope of the application.
It will be appreciated by those skilled in the art that the term user terminal encompasses any suitable type of wireless user equipment, such as a mobile telephone, a portable data processing device, a portable web browser, or a vehicle-mounted mobile station.
In general, the various embodiments of the application may be implemented in hardware or special purpose circuits, software, logic or any combination thereof. For example, some aspects may be implemented in hardware, while other aspects may be implemented in firmware or software which may be executed by a controller, microprocessor or other computing device, although the application is not limited thereto.
Embodiments of the application may be implemented by a data processor of a mobile device executing computer program instructions, e.g. in a processor entity, either in hardware, or in a combination of software and hardware. The computer program instructions may be assembly instructions, instruction set architecture (Instruction Set Architecture, ISA) instructions, machine-related instructions, microcode, firmware instructions, state setting data, or source or object code written in any combination of one or more programming languages.
The block diagrams of any logic flows in the figures of this application may represent program steps, interconnected logic circuits, modules and functions, or a combination of program steps and logic circuits, modules and functions. The computer program may be stored on a memory. The memory may be of any type suitable to the local technical environment and may be implemented using any suitable data storage technology, such as, but not limited to, read-only memory (ROM), random access memory (RAM), and optical memory devices and systems (digital versatile disc (DVD) or compact disc (CD)). The computer-readable medium may comprise a non-transitory storage medium.
The foregoing detailed description of exemplary embodiments of the application has been provided by way of exemplary and non-limiting examples. Various modifications and adaptations of the above embodiments may become apparent to those skilled in the art without departing from the scope of the application, which is defined in the appended claims. Accordingly, the proper scope of the application is to be determined according to the claims.
Claims (23)
1. A computing network scheduling method applied to a network domain node, comprising:
Generating a service identifier according to the service registration request;
determining a computing power metric according to the resource state of the service instance corresponding to the service identifier;
issuing the service identifier and the service level agreement SLA information of the corresponding user access to a computing power gateway;
and computing a computing power route according to the computing power metric, the computing power information advertised by the computing power gateway, and the SLA information.
2. The method of claim 1, wherein the service registration request includes a service domain name, a service IP and port, SLA information, and resource information.
3. The method of claim 1, wherein the metrics of the SLA information include latency, cost, bandwidth, jitter, and reliability;
the computing power metrics comprise a network delay converted from computing delay, a network cost converted from resource cost, an interface bandwidth converted from the total amount of resources, and an interface bandwidth occupation converted from resource occupation.
4. The method of claim 1, wherein the computing power route comprises a global service route and a local service route;
computing the computing power route according to the computing power metric, the computing power information advertised by the computing power gateway and the SLA information comprises:
determining a traffic engineering computation path from the computing power ingress gateway to the virtual node mapped by the specific service identifier according to the computing power metric, the computing power information advertised by the computing power gateway, and the SLA information;
matching the traffic engineering computation path with a global service routing table to obtain the global service route;
determining the service instance corresponding to the specific service identifier according to the resource state of the service instance to obtain the local service route;
wherein the penultimate node of the traffic engineering computation path corresponds to a computing power egress gateway, and the end node of the traffic engineering computation path corresponds to the virtual node mapped by the specific service identifier.
5. The method of claim 4, wherein the virtual link and link attribute information corresponding to the virtual node are obtained by the computing power egress gateway selecting the service instance for the specific service identifier according to a local service routing table, the local service routing table comprising a virtual routing forwarding identifier, a service instance IP and the corresponding computing power metric information;
the virtual link and link attribute information are disseminated by an interior gateway protocol (IGP).
6. The method of claim 4, wherein the global service route comprises virtual private network information, a service identifier, a service color, a computing power egress gateway IP, a computing power egress gateway segment identifier, and a virtual private network segment identifier;
the global service route is formed by diffusing the computing power information from the computing power egress gateway to the computing power ingress gateway;
the bearer protocol for the global service routing information is based on BGP protocol family extensions.
7. The method of claim 6, wherein the computing power information does not include the service instance and preferred entry information corresponding to the specific service identifier;
whether the computing power information is advertised or withdrawn is decided by the computing power egress gateway according to whether a corresponding preferred entry exists locally;
the computing power information is keyed by the virtual routing forwarding identifier and the service identifier;
and the computing power information is diffused from the computing power egress gateway to the computing power ingress gateway, where it is processed uniformly.
8. A computing network scheduling method applied to a cloud side node, comprising:
sending a service registration request to a network domain node according to the type of the managed service;
acquiring the resource state of the service instance corresponding to the service identifier in a case where the service registration succeeds;
and notifying the computing power egress gateway of the computing power information according to the resource state, wherein the computing power information is diffused to an upstream gateway by the computing power egress gateway.
9. A computing network scheduling method applied to a computing power ingress gateway, comprising:
acquiring the computing power information advertised by the computing power egress gateway, together with the service identifier issued by the network domain node and the service level agreement SLA information of the corresponding user access;
computing the computing power route corresponding to a specific service identifier according to the computing power metric, the computing power information and the SLA information;
and forwarding the service request message to the corresponding computing power egress gateway according to the computing power route.
10. The method as recited in claim 9, further comprising:
maintaining a global service routing table with multiple next hops for the specific service identifier according to the computing power information received from multiple computing power egress gateways;
and constructing a traffic engineering model according to the SLA information corresponding to the specific service identifier and the corresponding virtual node, wherein the traffic engineering model comprises color information and the virtual node IP mapped by the service identifier.
11. The method of claim 10, wherein the computing power route comprises a global service route;
computing the computing power route corresponding to the specific service identifier according to the computing power metric, the computing power information and the SLA information comprises:
determining a traffic engineering computation path from the computing power ingress gateway to the virtual node mapped by the service identifier according to the computing power metric, the computing power information and the SLA information;
matching the traffic engineering computation path with the global service routing table to obtain the global service route;
wherein the penultimate node of the traffic engineering computation path corresponds to a computing power egress gateway, and the end node of the traffic engineering computation path corresponds to the virtual node mapped by the service identifier.
12. The method of claim 10, wherein the computing power information does not include the service instance and preferred entry information corresponding to the specific service identifier;
whether the computing power information is advertised or withdrawn is decided by the computing power egress gateway according to whether a corresponding preferred entry exists locally;
the computing power information is keyed by the virtual routing forwarding identifier and the service identifier;
and the computing power information is diffused from the computing power egress gateway to the computing power ingress gateway, where it is processed uniformly.
13. A computing network scheduling method applied to a computing power egress gateway, comprising:
acquiring the resource state of a service instance;
determining a computing power metric and a local service route according to the resource state;
installing the virtual link and link attribute information corresponding to the virtual node, wherein the virtual link and link attribute information are obtained by the computing power egress gateway selecting a service instance for a specific service identifier according to the local service routing table;
and diffusing the computing power information advertised by the cloud side to the computing power ingress gateway.
14. The method of claim 13, wherein the computing power metric comprises a network delay converted from computing delay, a network cost converted from resource cost, an interface bandwidth converted from the total amount of resources, and an interface bandwidth occupation converted from resource occupation;
the method further comprises:
generating a local service routing table containing virtual routing forwarding (VRF) identifiers, service instance identifiers, and the corresponding computing power metric information.
15. The method of claim 13, wherein installing the virtual link and link attribute information corresponding to the virtual node comprises:
taking, according to the local service route, the IP corresponding to the service identifier as the virtual node and the computing power metric information of the corresponding entry as the link attribute of the virtual link, and performing intra-domain IGP flooding to diffuse the virtual link and link attribute information within the domain.
16. The method of claim 13, wherein the computing power information does not include the service instance and preferred entry information corresponding to the specific service identifier;
whether the computing power information is advertised or withdrawn is decided by the computing power egress gateway according to whether a corresponding preferred entry exists locally;
the computing power information is keyed by the virtual routing forwarding identifier and the service identifier;
and the computing power information is diffused from the computing power egress gateway to the computing power ingress gateway, where it is processed uniformly.
17. The method as recited in claim 13, further comprising:
searching the local service routing table according to the service identifier in the service request message to determine the corresponding service instance, and forwarding the message to that service instance.
18. A network domain node comprising a memory, and one or more processors;
the memory is configured to store one or more programs;
the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the computing network scheduling method of any one of claims 1 to 7.
19. A cloud side node, comprising a memory and one or more processors;
the memory is configured to store one or more programs;
The one or more programs, when executed by the one or more processors, cause the one or more processors to implement the computing network scheduling method of claim 8.
20. A computing force ingress gateway comprising a memory, and one or more processors;
the memory is configured to store one or more programs;
The one or more programs, when executed by the one or more processors, cause the one or more processors to implement the computing network scheduling method of any one of claims 9 to 12.
21. A computing force egress gateway comprising a memory, and one or more processors;
the memory is configured to store one or more programs;
The one or more programs, when executed by the one or more processors, cause the one or more processors to implement the computing network scheduling method of any one of claims 13 to 17.
22. A computing network scheduling system, comprising the network domain node of claim 18 and the cloud side node of claim 19;
the network domain node comprises a computing network brain, a packet network device, the computing power ingress gateway of claim 20 and the computing power egress gateway of claim 21.
23. A computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the computing network scheduling method of any one of claims 1 to 17.
Priority Applications (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202310835746.3A CN119276861A (en) | 2023-07-07 | 2023-07-07 | Computing network scheduling method, network domain, cloud side node, computing power gateway, system and medium |
| PCT/CN2024/091321 WO2025011150A1 (en) | 2023-07-07 | 2024-05-07 | Computing power network scheduling method, network domain node, cloud side node, computing power gateway and system |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202310835746.3A CN119276861A (en) | 2023-07-07 | 2023-07-07 | Computing network scheduling method, network domain, cloud side node, computing power gateway, system and medium |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| CN119276861A (en) | 2025-01-07 |
Family
ID=94106213
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202310835746.3A Pending CN119276861A (en) | 2023-07-07 | 2023-07-07 | Computing network scheduling method, network domain, cloud side node, computing power gateway, system and medium |
Country Status (2)
| Country | Link |
|---|---|
| CN (1) | CN119276861A (en) |
| WO (1) | WO2025011150A1 (en) |
Cited By (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN120321176A (en) * | 2025-06-19 | 2025-07-15 | 宁波银行股份有限公司 | Routing information processing method, system, electronic device and storage medium |
| CN120602492A (en) * | 2025-08-04 | 2025-09-05 | 中国铁塔股份有限公司 | A method and system for collecting and managing network equipment computing power information |
| CN120811772A (en) * | 2025-09-09 | 2025-10-17 | 江苏未来网络集团有限公司 | Data circulation method and system based on high-speed data network |
Families Citing this family (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN121262083A (en) * | 2025-12-04 | 2026-01-02 | 北京九章云极科技有限公司 | Configuration method and device for computing power routing gateway of intelligent computing center cloud platform |
Family Cites Families (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20200028758A1 (en) * | 2018-07-17 | 2020-01-23 | Cisco Technology, Inc. | Multi-cloud connectivity using srv6 and bgp |
| CN115225722A (en) * | 2021-04-20 | 2022-10-21 | 中兴通讯股份有限公司 | Computing resource notification method and device, storage medium and electronic device |
| CN114980250B (en) * | 2022-04-27 | 2024-08-13 | 山东浪潮科学研究院有限公司 | Computing power routing system and method based on SRv6 |
| CN115695281B (en) * | 2022-10-26 | 2025-06-17 | 北京星网锐捷网络技术有限公司 | A node scheduling method, device, equipment and medium for computing power network |
Cited By (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN120321176A (en) * | 2025-06-19 | 2025-07-15 | 宁波银行股份有限公司 | Routing information processing method, system, electronic device and storage medium |
| CN120602492A (en) * | 2025-08-04 | 2025-09-05 | 中国铁塔股份有限公司 | A method and system for collecting and managing network equipment computing power information |
| CN120602492B (en) * | 2025-08-04 | 2025-11-07 | 中国铁塔股份有限公司 | A method and system for collecting and managing computing power information of network devices |
| CN120811772A (en) * | 2025-09-09 | 2025-10-17 | 江苏未来网络集团有限公司 | Data circulation method and system based on high-speed data network |
Also Published As
| Publication number | Publication date |
|---|---|
| WO2025011150A1 (en) | 2025-01-16 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| CN111510387B (en) | Data forwarding method and related device | |
| US12368669B2 (en) | Packet sending method, device, and system | |
| CN109639557B (en) | Method, device and system for network communication | |
| CN119276861A (en) | Computing network scheduling method, network domain, cloud side node, computing power gateway, system and medium | |
| US9634928B2 (en) | Mesh network of simple nodes with centralized control | |
| US10484203B2 (en) | Method for implementing communication between NVO3 network and MPLS network, and apparatus | |
| CN111865796B (en) | Path Computation Element Central Controller (PCECC) for network services | |
| JP4997196B2 (en) | Communication network system, path calculation device, and communication path establishment control method | |
| CN104780099A (en) | Dynamic end-to-end network path setup across multiple network layers with network service chaining | |
| WO2019134639A1 (en) | Method and apparatus for implementing optimal seamless cross-domain path, device and storage medium | |
| EP3682597B1 (en) | Modeling access networks as trees in software-defined network controllers | |
| EP3297245B1 (en) | Method, apparatus and system for collecting access control list | |
| US9674072B1 (en) | Route topology discovery in data networks | |
| WO2015035616A1 (en) | Method and device for cross-network communications | |
| US12401584B2 (en) | Underlay path discovery for a wide area network | |
| CN114900455B (en) | Message transmission method, system, equipment and storage medium | |
| CN104168194B (en) | Cluster network controlling of path thereof, equipment and cluster network system | |
| US20190199577A1 (en) | Oss dispatcher for policy-based customer request management | |
| WO2017000858A1 (en) | Network element device and method for opening data communication network | |
| US8750166B2 (en) | Route topology discovery in data networks | |
| CN114531392B (en) | Multicast service design method, server and storage medium | |
| CN113904981A (en) | Routing information processing method and device, electronic equipment and storage medium | |
| CN105684362A (en) | Interworking between first protocol entity of stream reservation protocol and second protocol entity of routing protocol | |
| US20240179573A1 (en) | Information transmission method, network node, controller, and storage medium | |
| CN117527692A (en) | Computing power notification and routing methods, electronic devices and storage media in the computing power network |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| PB01 | Publication | | |