CN112738149B - Data transmission system and method - Google Patents
Data transmission system and method
- Publication number
- CN112738149B (application CN201911037552.9A)
- Authority
- CN
- China
- Prior art keywords
- node
- data
- root node
- edge
- edge nodes
- Prior art date
- Legal status
- Active
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/01—Protocols
- H04L67/10—Protocols in which an application is distributed across nodes in the network
- H04L67/1001—Protocols in which an application is distributed across nodes in the network for accessing one among a plurality of replicated servers
- H04L67/1095—Replication or mirroring of data, e.g. scheduling or transport for data synchronisation between network nodes
- H04L67/1097—Protocols in which an application is distributed across nodes in the network for distributed storage of data in networks, e.g. transport arrangements for network file system [NFS], storage area networks [SAN] or network attached storage [NAS]
Landscapes
- Engineering & Computer Science (AREA)
- Computer Networks & Wireless Communication (AREA)
- Signal Processing (AREA)
- Small-Scale Networks (AREA)
- Information Transfer Between Computers (AREA)
Abstract
The present disclosure relates to a data transmission system and method. The system is applied to a content distribution system and comprises at least two edge nodes that prefetch the same data, at least one of which serves as a root node; the root node acquires the data from a source station or an upper-layer node, and the edge nodes other than the root node acquire the data from the root node.
Description
Technical Field
The present disclosure relates to the field of information processing, and in particular, to a data transmission system and method.
Background
Prefetching/warm-up is a basic function of a content delivery network (Content Delivery Network, CDN). A user submits a resource to be warmed up (usually expressed as a URL) through a prefetching system and requests that designated CDN nodes cache the resource in advance, thereby improving access performance and reducing the pressure on upper-layer nodes and the source station.
In the related art, a CDN architecture is hierarchical, generally consisting of a large number of edge nodes spread across regions and a smaller number of upper-layer nodes deployed in core regions. Edge nodes serve visitors directly, and their quality of service ultimately determines the quality of service of the CDN. Intuitively, as a function for optimizing quality and relieving upper-layer pressure, the prefetching system obtains a better effect the more needed resources it prefetches to all edge nodes. However, having a large number of edge nodes fetch directly from the origin (back-to-source) places huge performance and bandwidth pressure on the upper-layer nodes and the source station, incurs huge cost overhead, and may even affect the quality of service.
Disclosure of Invention
To overcome the problems in the related art, a data transmission system and method are provided herein.
According to an aspect herein, there is provided a data transmission system, for use in a content distribution system, comprising at least two edge nodes, wherein the at least two edge nodes prefetch the same data; wherein at least one of the at least two edge nodes serves as a root node; wherein the root node acquires the data from a source station or an upper node, and edge nodes other than the root node acquire the data from the root node.
In an exemplary embodiment, the obtaining the data from the root node by the edge nodes other than the root node includes at least one of:
the other edge nodes except the root node directly acquire the data from the root node;
after one of the edge nodes other than the root node has obtained the data, another edge node obtains the data from the edge node that has obtained it.
In an exemplary embodiment, the edge nodes other than the root node are configured to acquire transmission path information required for the data acquisition, and acquire the data according to the transmission path.
In an exemplary embodiment, the system further comprises:
a management node, configured to arrange the at least two edge nodes into a tree structure with the root node as the root of the tree, determine, according to the tree structure, the transmission path information required by each edge node other than the root node to acquire the data, and send the transmission path information to the edge nodes other than the root node.
In an exemplary embodiment, the management node is specifically configured to determine, according to the tree structure, the transmission path information by which the edge nodes other than the root node acquire the data layer by layer, with the root node as the source.
In an exemplary embodiment, the management node is configured to determine, after detecting that the root node acquires data, an edge node located at a second layer in the tree structure, and determine, after detecting that the edge node located at the second layer in the tree structure acquires the data, an edge node located at a third layer in the tree structure, and so on, until detecting that the edge node located at the lowest layer in the tree structure acquires the data.
In an exemplary embodiment, each of the edge nodes other than the root node is a parent node or a leaf node, wherein:
the parent node is used for acquiring the data from the root node or another parent node and sending the data to a leaf node or another parent node;
the leaf node is used for acquiring the data from the root node or the parent node.
In an exemplary embodiment, the root node is determined from a resource headroom of the at least two edge nodes.
In an exemplary embodiment, the root node is obtained by:
acquiring the free amount of each resource of each edge node;
calculating the resource headroom of each edge node according to the free amount of each resource and the preset coefficient of each resource;
and selecting an edge node whose resource headroom satisfies the condition of sufficient headroom as the root node.
In an exemplary embodiment, the parent node and the leaf node are obtained by:
selecting some of the edge nodes other than the root node as parent nodes according to their resource headroom, wherein:
The higher the resource margin of the node, the higher the probability of being selected as the parent node;
the lower the resource headroom of a node, the lower the probability of being selected as a parent node.
In an exemplary embodiment, the resource margin of the edge node in the ith layer in the tree structure is higher than the resource margin of the (i+1) th layer, where i is a positive integer.
According to another aspect herein, there is provided a data transmission method applied to a content distribution system, including:
determining at least two edge nodes that prefetch the same data;
taking at least one of the at least two edge nodes as a root node;
controlling the root node to acquire the data from a source station or an upper node; and controlling other edge nodes except the root node to acquire the data from the root node.
In an exemplary embodiment, the controlling the other edge nodes than the root node to obtain the data from the root node includes at least one of:
controlling the other edge nodes except the root node to directly acquire the data from the root node;
and after one of the edge nodes other than the root node has obtained the data, controlling another edge node to obtain the data from the edge node that has obtained it.
In an exemplary embodiment, the controlling the edge nodes other than the root node to obtain the data from the root node includes:
transmitting, to the edge nodes other than the root node, the transmission path information required to acquire the data directly or indirectly from the root node, and instructing those edge nodes to acquire the data according to the transmission paths.
In an exemplary embodiment, the transmission path information is obtained in the following manner:
arranging the at least two edge nodes into a tree structure with the root node as the root of the tree, and determining, according to the tree structure, the transmission path information required by each edge node other than the root node to acquire the data.
In an exemplary embodiment, the determining, according to the tree structure, transmission path information required for each edge node except a root node to acquire the data includes:
determining, according to the tree structure, the transmission path information by which the edge nodes other than the root node acquire the data layer by layer, with the root node as the source.
In an exemplary embodiment, the transmitting transmission path information required for directly or indirectly acquiring the data from the root node to the other edge nodes except the root node includes:
after detecting that the root node acquires the data, determining edge nodes positioned at a second layer in the tree structure, after detecting that the edge nodes positioned at the second layer in the tree structure acquire the data, determining edge nodes positioned at a third layer in the tree structure, and the like until detecting that the edge nodes at the bottommost layer in the tree structure acquire the data.
In an exemplary embodiment, the controlling the edge nodes other than the root node to obtain the data from the root node includes:
setting each edge node in the other edge nodes except the root node as a parent node or a leaf node, wherein:
the parent node obtains the data from a root node or one parent node and sends the data to a leaf node or the other parent node;
the leaf node obtains the data from a root node or a parent node.
In an exemplary embodiment, the root node is determined from a resource headroom of the at least two edge nodes.
In an exemplary embodiment, the root node is obtained by:
acquiring the free amount of each resource of each edge node;
calculating the resource headroom of each edge node according to the free amount of each resource and the preset coefficient of each resource;
and selecting an edge node whose resource headroom satisfies the condition of sufficient headroom as the root node.
In an exemplary embodiment, the parent node and the leaf node are obtained by:
selecting some of the edge nodes other than the root node as parent nodes according to their resource headroom, wherein:
the higher the resource margin of the node, the higher the probability of being selected as the parent node;
the lower the resource headroom of a node, the lower the probability of being selected as a parent node.
In an exemplary embodiment, the resource headroom of edge nodes in the i-th layer of the tree structure is higher than that of edge nodes in the (i+1)-th layer, where i is a positive integer.
According to another aspect herein, there is provided a computer storage medium having stored thereon a computer program which when executed performs the steps of the method of any of the above.
According to another aspect herein, there is provided a computer device comprising a processor, a memory and a computer program stored on the memory, the processor implementing the steps of any one of the methods above when executing the computer program.
The solution provided by the exemplary embodiments herein determines at least two edge nodes that prefetch the same data, takes at least one of them as a root node, controls the root node to acquire the data from an upper-layer node, and controls the edge nodes other than the root node to acquire the data from the root node, thereby reducing the number of direct back-to-source operations performed by the edge nodes and reducing the pressure and cost on upper-layer nodes.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the invention as claimed.
Drawings
The accompanying drawings, which are included to provide a further understanding of the disclosure, illustrate and explain the disclosure, and do not constitute a limitation on the disclosure. In the drawings:
Fig. 1 is a flow chart illustrating a data transmission method according to an exemplary embodiment.
Fig. 2 is a schematic diagram illustrating a data transmission method according to an exemplary embodiment.
FIG. 3 is a schematic diagram of a random tree shown according to an exemplary embodiment.
Fig. 4 is a flow chart illustrating a method of using a random tree according to an exemplary embodiment.
FIG. 5 is a block diagram of a computer device, according to an example embodiment.
Detailed Description
For the purposes of making the objects, technical solutions and advantages of the embodiments herein more apparent, the technical solutions in the embodiments herein will be clearly and completely described below with reference to the accompanying drawings in the embodiments herein, and it is apparent that the described embodiments are some, but not all, embodiments herein. All other embodiments, based on the embodiments herein, which a person of ordinary skill in the art would obtain without undue burden, are within the scope of protection herein. It should be noted that, without conflict, the embodiments and features of the embodiments herein may be arbitrarily combined with each other.
In view of the problems encountered in edge prefetching in the related art, the inventors analyzed their causes as follows:
1. When edge nodes perform prefetching, a large number of them need to go back to upper-layer nodes to fetch resources; because so many direct back-to-source operations are performed, huge processing pressure is placed on the upper-layer nodes and huge resource overhead on the system;
2. As the number and scale of edge nodes grow, the pressure and cost that prefetching imposes on the upper layer grow accordingly, and the difficulty of edge prefetching increases with the number of edge nodes;
3. The resource headroom of different edge nodes differs greatly (for example in node bandwidth and machine performance); nodes with more headroom are hard to utilize fully, while nodes with less headroom are easily overwhelmed by the prefetching task and cannot serve normal traffic.
Based on the above analysis, the present embodiments provide the following solutions:
exemplary embodiments herein provide a data transmission system comprising at least two edge nodes, wherein the at least two edge nodes prefetch the same data; wherein at least one of the at least two edge nodes serves as a root node; wherein the root node acquires the data from a source station or an upper node, and edge nodes other than the root node acquire the data from the root node.
In this document, the edge nodes and upper-layer nodes (also called parent nodes or the parent layer) of the CDN are concrete entities, whereas the nodes of the tree structure adopted here, such as the root node mentioned above, are abstract concepts describing relationships and structure.
If the data to be prefetched by the root node is stored in the source station, the root node acquires the data from the source station; if the source station does not store the data, the data may be retrieved from an upper-layer node.
In an exemplary embodiment, the edge nodes that prefetch the same data may be determined according to the URL carried in the warm-up request sent by each edge node: if the URLs sent by at least two edge nodes are the same, the data those edge nodes are to acquire is the same. The edge nodes that need to prefetch the same data are treated as one prefetch task, and the edge nodes in that task form an edge group.
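As a minimal illustration of this grouping step (a sketch only; the function name `group_prefetch_requests` and the data layout are assumptions, not part of the disclosure), warm-up requests could be grouped by URL as follows:

```python
from collections import defaultdict

def group_prefetch_requests(requests):
    """Group warm-up requests by URL: edge nodes that request the same URL
    want the same data and together form one prefetch task (an edge group).

    `requests` is an iterable of (edge_node_id, url) pairs.
    Returns a dict mapping url -> list of edge node ids.
    """
    groups = defaultdict(list)
    for node_id, url in requests:
        groups[url].append(node_id)
    # Only URLs requested by at least two edge nodes form an edge group;
    # a single requester simply fetches in the ordinary way.
    return {url: nodes for url, nodes in groups.items() if len(nodes) >= 2}

# Example: three nodes warming up the same URL become one edge group.
edge_groups = group_prefetch_requests([
    ("edge-1", "http://example.com/a.mp4"),
    ("edge-2", "http://example.com/a.mp4"),
    ("edge-3", "http://example.com/a.mp4"),
])
```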
In an exemplary embodiment, at least one edge node is selected from the at least two edge nodes as a root node to execute the operation of direct source return, so that the processing pressure on the upper node is effectively reduced; meanwhile, the root node acquires the required data, and other edge nodes except the root node acquire the data from the root node, so that the at least two edge nodes are ensured to acquire the required data.
In an exemplary embodiment, the edge nodes other than the root node may acquire the data from the root node directly, by connecting to the root node; or indirectly, in which case the edge nodes other than the root node are divided into multiple layers: the edge nodes at the second layer connect to the root node, the edge nodes at the n-th layer connect to edge nodes at the (n-1)-th layer rather than directly to the root node, and the data reaches them from the root node through relaying by other edge nodes.
After the root node acquires the data, if all the other edge nodes acquired the data directly from the root node, the edge node serving as the root node would face the same excessive pressure and overhead problems as the upper-layer node. To solve this, any edge node that has already obtained the data also serves as a data provider, which greatly shares the pressure on the root node; moreover, as more and more edge nodes obtain the data, the data spreads faster and faster, effectively improving acquisition efficiency.
Specifically:
in an exemplary embodiment, the obtaining the data from the root node by the edge nodes other than the root node includes at least one of:
the other edge nodes except the root node directly acquire the data from the root node;
after one of the edge nodes other than the root node has obtained the data, another edge node obtains the data from the edge node that has obtained it.
For example, suppose edge node 1, edge node 2 and edge node 3 all need to prefetch the same data. Edge node 1 serves as the root node, edge node 2 connects directly to the root node (edge node 1), and edge node 3 connects to the root node through edge node 2. After the root node acquires the data, edge node 2 acquires it directly from the root node, and edge node 3 acquires it from edge node 2, that is, indirectly from the root node.
In the above example, the edge node 3 acquires data from the edge node 2 instead of the root node (edge node 1), and the edge node 2 shares the processing pressure of the root node (edge node 1).
In an exemplary embodiment, the edge nodes other than the root node are configured to acquire transmission path information required for the data acquisition, and acquire the data according to the transmission path.
Acquiring the transmission path information allows the data to be transmitted among the edge nodes and ensures that each edge node successfully obtains the data.
In an exemplary embodiment, the system further comprises:
the management node is used for arranging the at least two edge nodes according to a tree structure by taking the root node as the root node of the tree structure, determining transmission path information required by each edge node except the root node to acquire the data according to the tree structure, and sending the transmission path information to the other edge nodes except the root node.
In an exemplary embodiment, the management node may be a prefetch center that manages the prefetch operations of the edge nodes in a unified way: after collecting the prefetch requests of the edge nodes, it groups the edge nodes that prefetch the same data and generates the required transmission path information for them.
In an exemplary embodiment, the degree of the tree structure may be set according to actual needs; for example, the tree structure may be a binary tree, and the nodes may be arranged in a weighted random manner so that the edge nodes are utilized reasonably.
In an exemplary embodiment, the management node is specifically configured to determine, according to the tree structure, the transmission path information by which the edge nodes other than the root node acquire the data layer by layer, with the root node as the source.
In the above exemplary embodiment, the edge nodes of the i-th layer and the (i+1)-th layer of the tree structure are obtained, the connection information between each (i+1)-th-layer edge node and an i-th-layer edge node is determined, and from this connection information the transmission path information by which the (i+1)-th-layer edge node acquires the data from the i-th-layer edge node is determined, where i is a positive integer.
That is, in the tree structure, the transmission path information of an (i+1)-th-layer edge node indicates that it acquires the data from an i-th-layer edge node.
Because data is transferred only between adjacent layers of the tree structure, data transmission between edge nodes is realized and transmission efficiency is improved; the pressure of delivering the data to every edge node is shared among multiple edge nodes, and the pressure and cost on the root node are reduced.
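A minimal sketch of how such transmission path information could be derived from the tree (assuming the tree is held as a child-to-parent map; the function name and representation are illustrative only, not prescribed by this disclosure):

```python
def transmission_paths(parent_of, root):
    """For each edge node, return the chain of nodes it pulls the data
    through: its parent, then the parent's parent, and so on up to the root.

    `parent_of` maps each non-root node to the node one layer above it.
    """
    paths = {}
    for node in parent_of:
        chain, current = [], node
        while current != root:
            current = parent_of[current]
            chain.append(current)
        paths[node] = chain
    return paths

# For the three-node example above: node 2 pulls from the root (node 1),
# and node 3 pulls from node 2, i.e. indirectly from the root.
print(transmission_paths({2: 1, 3: 2}, root=1))  # {2: [1], 3: [2, 1]}
```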
In an exemplary embodiment, the management node is configured to determine, after detecting that the root node acquires data, an edge node located at a second layer in the tree structure, and determine, after detecting that the edge node located at the second layer in the tree structure acquires the data, an edge node located at a third layer in the tree structure, and so on, until detecting that the edge node located at the lowest layer in the tree structure acquires the data.
In one exemplary embodiment, the prefetch tasks are performed one by one, and a tree is generated for each prefetch task.
The management node does not need to generate the whole tree or save the tree structure as soon as the task starts; instead it can generate the lower-layer nodes gradually, starting from the root node, while the task is being executed, so that building the tree structure involves no waiting and is more efficient.
In an exemplary embodiment, each of the edge nodes other than the root node is a parent node or a leaf node, wherein:
the parent node is used for acquiring the data from the root node or another parent node and sending the data to a leaf node or another parent node;
The leaf node is used for acquiring the data from the root node or the parent node.
In one exemplary embodiment, taking edge node 1, edge node 2, and edge node 3 in the above examples as examples, edge node 2 is a parent node and edge node 3 is a leaf node.
In an exemplary embodiment, the root node is determined from a resource headroom of the at least two edge nodes.
Selecting the root node according to the resource headroom of the edge nodes ensures that the resources of the edge nodes are fully and reasonably utilized.
In an exemplary embodiment, the root node is obtained by:
acquiring the free amount of each resource of each edge node;
calculating the resource headroom of each edge node according to the free amount of each resource and the preset coefficient of each resource;
and selecting an edge node whose resource headroom satisfies the condition of sufficient headroom as the root node.
In an exemplary embodiment, resource headroom data of each edge node may be obtained, such as spare node bandwidth and machine performance load; the resource headroom of each edge node is then calculated and stored. The expression used is as follows:
w=r1*k1+r2*k2+…+rN*kN;
where w represents the resource headroom, rN represents the free amount of resource N, and kN represents the coefficient corresponding to resource N, N being a positive integer that indexes the resources.
In the above calculation expression, kN may be adjusted according to actual conditions.
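As a minimal sketch of this calculation (the function name and the sample free amounts and coefficients are illustrative assumptions, not values given in this disclosure):

```python
def resource_headroom(free_amounts, coefficients):
    """w = r1*k1 + r2*k2 + ... + rN*kN, where rN is the free amount of
    resource N and kN is the preset coefficient for that resource."""
    return sum(r * k for r, k in zip(free_amounts, coefficients))

# e.g. spare bandwidth (Gbps), spare CPU fraction, spare disk I/O fraction,
# with coefficients chosen to weight bandwidth most heavily.
w = resource_headroom([4.0, 0.35, 0.60], [10.0, 5.0, 2.0])
```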
In an exemplary embodiment, the parent node and the leaf node are obtained by:
selecting some of the edge nodes other than the root node as parent nodes according to their resource headroom, wherein:
the higher the resource margin of the node, the higher the probability of being selected as the parent node;
the lower the resource headroom of a node, the lower the probability of being selected as a parent node.
When transmitting data, a parent node must both acquire the data and send it onward, so its burden is heavy; a leaf node only needs to acquire the data, so its burden is light. Selecting parent nodes and leaf nodes according to resource headroom therefore makes full use of the resources of nodes with more headroom, while nodes with less headroom can still serve normal traffic without being overwhelmed by the prefetching task, achieving reasonable utilization of resources.
In an exemplary embodiment, the resource headroom of edge nodes in the i-th layer of the tree structure is higher than that of edge nodes in the (i+1)-th layer, where i is a positive integer.
When the tree structure is built, resource headroom is used as the selection probability: one edge node is selected at a time, filling the tree from top to bottom, and the next node is then drawn from the remaining edge nodes; this repeats until all edge nodes have been drawn in order and the tree structure is formed. From top to bottom, the nodes of the tree are the root node, the parent nodes and the leaf nodes, so the resources are fully utilized.
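A minimal sketch of this construction (assuming the tree is laid out top to bottom in the order the nodes are drawn; the function name and the use of Python's random.choices are illustrative choices, not prescribed here):

```python
import random

def build_random_tree(headroom, n=2):
    """Draw edge nodes one at a time with probability proportional to their
    resource headroom (without replacement), then connect them as an n-ary
    tree filled top to bottom: the first node drawn is the root, the next n
    are its children, and so on.

    `headroom` maps node_id -> resource headroom (the weight w above).
    Returns (order, parent_of), where parent_of maps node -> its parent.
    """
    remaining = dict(headroom)
    order = []
    while remaining:
        nodes = list(remaining)
        picked = random.choices(nodes, weights=[remaining[x] for x in nodes], k=1)[0]
        order.append(picked)
        del remaining[picked]
    # In the top-to-bottom array layout, the parent of the i-th node drawn
    # (0-based) is the node at index (i - 1) // n.
    parent_of = {order[i]: order[(i - 1) // n] for i in range(1, len(order))}
    return order, parent_of
```

Nodes with more headroom tend to be drawn earlier and therefore occupy the root and parent positions, while nodes with little headroom tend to end up as leaves, matching the behaviour described above.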
In the system, the following technical effects are achieved:
when at least two edge nodes prefetch the same data, only the edge node serving as the root node goes back to the upper-layer CDN node in the traditional way, and the other edge nodes acquire the data directly or indirectly from the root node; the direct back-to-source operations are therefore limited to the root node, the processing pressure on the upper layer is significantly reduced, and stable operation of the system is ensured;
as the number and scale of the edge nodes grow, once the root node has acquired the data the prefetching operation is completed by transmission between the edge nodes themselves, so no new pressure or cost is placed on the upper layer; at the same time, transferring the data between edge nodes reduces the difficulty of acquiring it;
the position of each edge node in the tree structure is determined according to its resource headroom: the more headroom a node has, the greater its probability of becoming the root node or a parent node in the random tree; the less headroom, the greater its probability of becoming a leaf node. Because the root node and the parent nodes must both acquire and output the resource, their burden is heavy, whereas the leaf nodes only need to acquire the resource, so their burden is light. The resources of nodes with more headroom are thus fully utilized, nodes with less headroom can still serve normal traffic without being overwhelmed by the prefetching task, and resources are used reasonably.
Fig. 1 is a flow chart illustrating a data transmission method according to an exemplary embodiment. The method shown in Fig. 1 is applied to a content distribution system and comprises the following steps: determining at least two edge nodes that prefetch the same data; taking at least one of the at least two edge nodes as a root node; controlling the root node to acquire the data from a source station or an upper-layer node; and controlling the edge nodes other than the root node to acquire the data from the root node.
By obtaining at least two edge nodes that prefetch the same data, taking at least one of them as the root node, controlling the root node to acquire the data from an upper-layer node, and controlling the other edge nodes to acquire the data from the root node, the method reduces the number of direct back-to-source operations performed by the edge nodes and reduces the pressure and cost on upper-layer nodes.
In an exemplary embodiment, the controlling the other edge nodes than the root node to obtain the data from the root node includes at least one of:
controlling the other edge nodes except the root node to directly acquire the data from the root node;
and after one of the edge nodes other than the root node has obtained the data, controlling another edge node to obtain the data from the edge node that has obtained it.
In an exemplary embodiment, the controlling the edge nodes other than the root node to obtain the data from the root node includes:
transmitting, to the edge nodes other than the root node, the transmission path information required to acquire the data directly or indirectly from the root node, and instructing those edge nodes to acquire the data according to the transmission paths.
In an exemplary embodiment, the transmission path information is obtained in the following manner:
arranging the at least two edge nodes into a tree structure with the root node as the root of the tree, and determining, according to the tree structure, the transmission path information required by each edge node other than the root node to acquire the data.
In an exemplary embodiment, the determining, according to the tree structure, transmission path information required for each edge node except a root node to acquire the data includes:
determining, according to the tree structure, the transmission path information by which the edge nodes other than the root node acquire the data layer by layer, with the root node as the source.
In an exemplary embodiment, the transmitting transmission path information required for directly or indirectly acquiring the data from the root node to the other edge nodes except the root node includes:
After detecting that the root node acquires the data, determining edge nodes positioned at a second layer in the tree structure, after detecting that the edge nodes positioned at the second layer in the tree structure acquire the data, determining edge nodes positioned at a third layer in the tree structure, and the like until detecting that the edge nodes at the bottommost layer in the tree structure acquire the data.
In an exemplary embodiment said controlling other edge nodes than the root node to obtain said data from said root node comprises:
setting each edge node in the other edge nodes except the root node as a parent node or a leaf node, wherein:
the parent node obtains the data from a root node or one parent node and sends the data to a leaf node or the other parent node;
the leaf node obtains the data from a root node or a parent node.
In an exemplary embodiment the root node is determined based on the resource headroom of the at least two edge nodes.
In an exemplary embodiment, the root node is obtained by:
acquiring the free amount of each resource of each edge node;
calculating the resource headroom of each edge node according to the free amount of each resource and the preset coefficient of each resource;
and selecting an edge node whose resource headroom satisfies the condition of sufficient headroom as the root node.
In an exemplary embodiment, the parent node and the leaf node are obtained by:
selecting some of the edge nodes other than the root node as parent nodes according to their resource headroom, wherein the resource headroom of each selected node is higher than that of the unselected nodes.
In an exemplary embodiment, the resource margin of the edge node in the ith layer in the tree structure is higher than the resource margin of the (i+1) th layer, where i is a positive integer.
The method has the following technical effects:
the edge nodes act as sources for one another, changing the traditional back-to-source path and reducing the pressure and cost overhead on the upper-layer node;
when the number of edge nodes needing prefetching is increased, the number of nodes which can be used for sharing the prefetching pressure is increased, so that the difficulty of edge prefetching is not influenced by the number of the edge nodes;
The pressure is distributed according to the resource headroom of the different edge nodes, which addresses the differences in headroom between edge nodes, makes full use of spare resources, and reduces resource waste.
The method exemplarily provided herein is described below:
Fig. 2 is a schematic diagram illustrating a data transmission method according to an exemplary embodiment. As shown in Fig. 2, the data transmission method proposed here as an example changes the conventional mechanism in which edge nodes go back to the upper layer, so that the edges instead act as sources for one another, that is, the edge nodes fetch from each other. A weight table indicating the resource headroom of each edge node is generated dynamically from that headroom, a random tree is then generated from the weight table, and the propagation path of each resource prefetch is determined by the random tree. When an edge prefetch task is executed on an edge node, the node goes back to the parent of its node in the random tree to which the task belongs. This scheme lets the edge nodes bear most of the prefetching pressure, and the same resource needs to be fetched from the upper layer as few times as once, which greatly reduces the pressure on upper-layer nodes and saves cost.
To implement the above method, the edge nodes must transfer resources according to the random tree of each resource. First, only the root node needs to go back to the upper-layer CDN node in the traditional way, so no matter how many edge nodes there are, controlling the number of root nodes controls how many times the upper-layer CDN node is accessed, i.e. its pressure and cost are reduced. Second, the weight of an edge node determines the probability of it appearing at different positions in the random tree: the more headroom a node has, the more likely it is to become the root node or a parent node; the less headroom, the more likely it is to become a leaf node. Because the root node and the parent nodes must both acquire and output the resource, their burden is heavy, while a leaf node only needs to acquire the resource, so its burden is light. In this way the differences in resource headroom between edge nodes are accommodated and the spare resources are fully utilized.
In order to achieve the technical effects, the method comprises the following three key steps:
dynamically generating a weight table according to the resource headroom of the edge nodes;
generating a random tree for each pre-fetching task according to the generated weight table;
data is transmitted using a random tree.
The three steps are described below:
1. the edge node weight table is generated by the following steps:
a) Obtain the list of edge nodes that need to prefetch, according to the prefetching scheme of the service to which the prefetch task belongs.
b) The prefetch system hub obtains resource free data of the edge node, typically node free bandwidth, machine performance load, etc., from other platforms.
c) Calculating and storing the weight of each edge node; wherein the calculation expression is as follows:
w = r1*k1 + r2*k2 + … + rN*kN;
wherein w represents a weight, rN represents a free amount of the resource N, kN represents a coefficient corresponding to the resource N, wherein N represents an N-th resource, and N is a positive integer.
In the above calculation expression, kN may be adjusted according to actual conditions.
2. The random tree of each prefetch task is generated by the following steps:
a) Using the weights as probabilities, randomly select one node; then draw the next node from the remaining nodes in the same weighted manner, and repeat until all the nodes have been drawn in order.
b) Take node 1 as the root node and connect the other nodes according to the branching factor n of the random tree to form the subsequent child nodes.
FIG. 3 is a schematic diagram of a random tree shown according to an exemplary embodiment. As shown in Fig. 3, at the second level of the random tree node 2 and node 3 become children of node 1; then node 4 and node 5 become children of node 2, node 6 and node 7 become children of node 3, and so on, forming a classic binary tree structure.
3. The edge node uses a random tree in the following manner:
fig. 4 is a flow chart illustrating a method of using a random tree according to an exemplary embodiment. As shown in fig. 4, the method includes:
a) The prefetch center does not need to generate the entire tree or save the tree structure when the task starts; instead, starting from the root node, it can generate the lower-layer nodes gradually while the task executes. The advantage is that new parent nodes are produced directly from leaf nodes that have completed the task, so no waiting occurs and efficiency is higher.
b) Firstly, when a task starts, a root node is selected and issued to a corresponding node for prefetching.
c) After the root node prefetching is completed, a task result is reported to a prefetching center, n leaf nodes are selected from the rest nodes, and a prefetching task taking the root node as a source is issued to the group of leaf nodes.
d) When any leaf node from the previous step finishes its task first and returns the result to the prefetch center, and there are still remaining nodes, that leaf node is taken as a parent node, and new leaf nodes are drawn from the remaining nodes to fetch from it.
e) Repeating step d until all remaining nodes receive the prefetch task.
f) All nodes complete the task and report the prefetching result, and the prefetching task is completely ended.
With this method, the upper-layer node no longer bears all of the back-to-source traffic, which greatly reduces its pressure and cost; as the number of edge nodes grows, more nodes are available to share the pressure, so the difficulty of edge prefetching is not affected by the number of edge nodes; and the resource headroom of the edge nodes is used to distribute the mutual-fetching pressure, which addresses the differences in edge resources across regions, makes full use of spare resources, and reduces resource waste.
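The layer-by-layer dispatch described in steps a) to f) above could be orchestrated roughly as follows (a sketch under stated assumptions: `dispatch` and `wait_for_completion` stand in for the prefetch center's actual messaging layer, and the weighted draw of new leaf nodes is abstracted into the pre-drawn `order` list):

```python
def run_prefetch_task(order, n, dispatch, wait_for_completion):
    """Execute one prefetch task over edge nodes listed in weighted-random
    order. `dispatch(node, source)` tells `node` to fetch the data from
    `source` (None meaning the source station / upper-layer node);
    `wait_for_completion()` blocks until some dispatched node reports its
    result and returns that node.
    """
    root, rest = order[0], list(order[1:])
    dispatch(root, None)   # only the root performs a direct back-to-source fetch
    pending = 1
    while pending:
        finished = wait_for_completion()
        pending -= 1
        # A node that now holds the data becomes a parent for up to n of the
        # remaining nodes, which fetch from it instead of the upper layer.
        for _ in range(min(n, len(rest))):
            dispatch(rest.pop(0), finished)
            pending += 1
```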
The exemplary embodiments herein provide a computer storage medium having stored thereon a computer program which, when executed, implements the steps of any of the methods described above.
Fig. 5 is a block diagram of a computer device 500, according to an example embodiment. For example, the computer device 500 may be provided as a server. Referring to fig. 5, the computer apparatus 500 includes a processor 501, and the number of processors may be set to one or more as needed. The computer device 500 further comprises a memory 502 for storing instructions, such as application programs, executable by the processor 501. The number of the memories can be set to one or more according to the requirement. Which may store one or more applications. The processor 501 is configured to execute instructions to perform the above-described method.
It will be apparent to one of ordinary skill in the art that embodiments herein may be provided as a method, apparatus (device), or computer program product. Accordingly, the present disclosure may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present disclosure may take the form of a computer program product embodied on one or more computer-usable storage media having computer-usable program code embodied therein. Computer storage media includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules or other data, including, but not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital Versatile Disk (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by a computer. Furthermore, as is well known to those of ordinary skill in the art, communication media typically embodies computer readable instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave or other transport mechanism and includes any information delivery media.
The description herein is with reference to flowchart illustrations and/or block diagrams of methods, apparatus (devices) and computer program products according to embodiments herein. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that an article or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such article or apparatus. Without further limitation, an element defined by the phrase "comprising … …" does not exclude the presence of additional identical elements in an article or apparatus that comprises the element.
While preferred embodiments herein have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. It is therefore intended that the following claims be interpreted as including the preferred embodiments and all alterations and modifications as fall within the scope herein.
It will be apparent to those skilled in the art that various modifications and variations can be made herein without departing from the spirit and scope of the disclosure. Thus, given that such modifications and variations herein fall within the scope of the claims herein and their equivalents, such modifications and variations are intended to be included herein.
Claims (18)
1. A data transmission system for use in a content distribution system, comprising at least two edge nodes, wherein the at least two edge nodes prefetch the same data; wherein at least one of the at least two edge nodes serves as a root node; the root node acquires the data from a source station or an upper node, and other edge nodes except the root node acquire the data from the root node;
the edge nodes except the root node are used for acquiring transmission path information required for acquiring the data and acquiring the data according to the transmission path;
the management node is used for arranging the at least two edge nodes according to a tree structure by taking the root node as the root node of the tree structure, determining transmission path information required by each edge node except the root node to acquire the data according to the tree structure, and sending the transmission path information to the other edge nodes except the root node;
After detecting that the root node acquires the data, determining edge nodes positioned at a second layer in the tree structure, after detecting that the edge nodes positioned at the second layer in the tree structure acquire the data, determining edge nodes positioned at a third layer in the tree structure, and the like until detecting that the edge nodes at the bottommost layer in the tree structure acquire the data.
2. The system of claim 1, wherein the obtaining the data from the root node by the edge nodes other than the root node comprises at least one of:
the other edge nodes except the root node directly acquire the data from the root node;
after the data obtained by one of the other edge nodes except the root node, the other edge node obtains the data from the edge node of the obtained data.
3. The system according to claim 1, wherein:
the management node is specifically configured to determine, according to the tree structure, that other edge nodes except for a root node use the root node as a source to acquire transmission path information of the data layer by layer.
4. The system according to claim 1, wherein:
each of the other edge nodes except the root node is a parent node or a leaf node, wherein:
the parent node is used for acquiring the data from the root node or one parent node and sending the data to the leaf node or the other parent node;
the leaf node is used for acquiring the data from the root node or the parent node.
5. The system of claim 4, wherein the root node is determined based on a resource headroom of the at least two edge nodes.
6. The system of claim 5, wherein the root node is obtained by:
acquiring the free quantity information of each resource of each edge node;
calculating the resource empty space information of each edge node according to the empty space of each resource and the preset coefficient information of each resource;
and selecting an edge node of which the resource vacancy information meets the judging condition of sufficient resource vacancies as a root node.
7. The system of claim 6, wherein the parent node and the leaf node are obtained by:
Selecting a part of nodes from other edge nodes except the root node as parent nodes according to the resource empty quantity information of the edge nodes, wherein:
the higher the resource margin of the node, the higher the probability of being selected as the parent node;
the lower the resource headroom of a node, the lower the probability of being selected as a parent node.
8. The system of claim 5, wherein the resource headroom of the edge node in the i-th layer in the tree structure is higher than the resource headroom of the i+1-th layer, wherein i is a positive integer.
9. A data transmission method applied to a content distribution system, comprising:
determining at least two edge nodes that prefetch the same data;
taking at least one of the at least two edge nodes as a root node;
controlling the root node to acquire the data from a source station or an upper node; and controlling other edge nodes except the root node to acquire the data from the root node;
the controlling the other edge nodes except the root node to acquire the data from the root node includes:
transmitting transmission path information required for directly or indirectly acquiring the data from the root node to other edge nodes except the root node, and indicating the other edge nodes except the root node to acquire the data according to the transmission paths;
The transmission path information is obtained by the following means, including:
the root node is used as a root node of a tree structure, the at least two edge nodes are arranged according to the tree structure, and transmission path information required by each edge node except the root node to acquire the data is determined according to the tree structure;
the transmitting transmission path information required for directly or indirectly acquiring the data from the root node to other edge nodes than the root node includes:
after detecting that the root node acquires the data, determining edge nodes positioned at a second layer in the tree structure, after detecting that the edge nodes positioned at the second layer in the tree structure acquire the data, determining edge nodes positioned at a third layer in the tree structure, and the like until detecting that the edge nodes at the bottommost layer in the tree structure acquire the data.
10. The method of claim 9, wherein the controlling the other edge nodes other than the root node to obtain the data from the root node comprises at least one of:
controlling the other edge nodes except the root node to directly acquire the data from the root node;
And after the data obtained by one edge node of the other edge nodes except the root node, controlling the other edge node to obtain the data from the edge node of the obtained data.
11. The method of claim 9, wherein the determining, according to the tree structure, transmission path information required for each edge node other than a root node to acquire the data includes:
and determining other edge nodes except the root node according to the tree structure, and taking the root node as a source to acquire the transmission path information of the data layer by layer.
12. The method of claim 9, wherein the controlling the edge nodes other than the root node to obtain the data from the root node comprises:
setting each edge node in the other edge nodes except the root node as a parent node or a leaf node, wherein:
the parent node obtains the data from a root node or one parent node and sends the data to a leaf node or the other parent node;
the leaf node obtains the data from a root node or a parent node.
13. The method of claim 12, wherein the root node is determined based on a resource headroom of the at least two edge nodes.
14. The method of claim 13, wherein the root node is obtained by:
acquiring the free quantity information of each resource of each edge node;
calculating the resource margin of each edge node according to the margin of each resource and the preset coefficient information of each resource;
and selecting an edge node with the resource vacancy meeting the judging condition of sufficient resource vacancy as a root node.
15. The method of claim 13, wherein the parent node and the leaf node are obtained by:
selecting a part of nodes from other edge nodes except the root node as parent nodes according to the resource empty quantity of the edge nodes, wherein:
the higher the resource margin of the node, the higher the probability of being selected as the parent node;
the lower the resource headroom of a node, the lower the probability of being selected as a parent node.
16. The method of claim 15, wherein the resource empty space of the edge nodes in the i-th layer is higher than the resource empty space of the edge nodes in the i+1-th layer in the tree structure, wherein i is a positive integer.
17. A computer storage medium having stored thereon a computer program which, when executed, implements the steps of the method according to any of claims 9-16.
18. A computer device comprising a processor, a memory and a computer program stored on the memory, the processor implementing the steps of the method according to any of claims 9-16 when the computer program is executed.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911037552.9A CN112738149B (en) | 2019-10-29 | 2019-10-29 | Data transmission system and method |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911037552.9A CN112738149B (en) | 2019-10-29 | 2019-10-29 | Data transmission system and method |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112738149A CN112738149A (en) | 2021-04-30 |
CN112738149B true CN112738149B (en) | 2023-04-25 |
Family
ID=75589036
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201911037552.9A Active CN112738149B (en) | 2019-10-29 | 2019-10-29 | Data transmission system and method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112738149B (en) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2011141963A1 (en) * | 2010-05-13 | 2011-11-17 | Hitachi, Ltd. | Information processing apparatus and data transfer method |
CN102307216A (en) * | 2011-03-15 | 2012-01-04 | 陈建国 | Peer-to-peer (P2P) stream media broadcast method and system for multimedia telephone |
CN105933234A (en) * | 2016-04-20 | 2016-09-07 | 乐视控股(北京)有限公司 | Node management method and system in CDN network |
CN107277561A (en) * | 2016-04-08 | 2017-10-20 | 北京优朋普乐科技有限公司 | Content distributing network |
CN109600447A (en) * | 2018-12-21 | 2019-04-09 | 北京百度网讯科技有限公司 | For handling the methods, devices and systems of data |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101883039B (en) * | 2010-05-13 | 2012-02-01 | 北京航空航天大学 | Data transmission network of large-scale cluster system and its construction method |
CN104022911B (en) * | 2014-06-27 | 2018-03-30 | 哈尔滨工业大学 | A kind of contents construction management method of pattern of fusion content distributing network |
US10133763B2 (en) * | 2015-10-20 | 2018-11-20 | International Business Machines Corporation | Isolation of concurrent operations on tree-based data structures |
CN107105013B (en) * | 2017-03-28 | 2020-06-30 | 北京梆梆安全科技有限公司 | File processing method, server, terminal and system |
CN108933835A (en) * | 2018-07-23 | 2018-12-04 | 安徽广行领视通信科技有限公司 | A kind of CDN distribution method for saving bandwidth resources |
- 2019-10-29: application CN201911037552.9A filed in China (CN); granted as patent CN112738149B, status Active
Non-Patent Citations (1)
Title |
---|
Wang Yukun (王煜坤). Design of a streaming media system based on CDN and P2P technology (基于CDN和P2P技术的流媒体系统设计). Modern Computer (Professional Edition), 2009, No. 03, full text. *
Also Published As
Publication number | Publication date |
---|---|
CN112738149A (en) | 2021-04-30 |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| GR01 | Patent grant | |