CN109194763A - Caching method based on small base station self-organizing cooperative in a kind of super-intensive network - Google Patents
- Publication number
- CN109194763A (application number CN201811110511.3A)
- Authority
- CN
- China
- Legal status: Granted (the legal status is an assumption and is not a legal conclusion)
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/50—Network services
- H04L67/56—Provisioning of proxy services
- H04L67/568—Storing data temporarily at an intermediate stage, e.g. caching
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/01—Protocols
- H04L67/06—Protocols specially adapted for file transfer, e.g. file transfer protocol [FTP]
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04W—WIRELESS COMMUNICATION NETWORKS
- H04W40/00—Communication routing or communication path finding
- H04W40/24—Connectivity information management, e.g. connectivity discovery or connectivity update
- H04W40/32—Connectivity information management, e.g. connectivity discovery or connectivity update for defining a routing cluster membership
Abstract
The invention provides a caching method based on self-organizing cooperation among small base stations in an ultra-dense network, and belongs to the technical field of wireless communication. The method comprises: first, obtaining a similarity matrix of the small base stations according to the load capacity and location of the small base stations in the ultra-dense network; then, clustering the small base stations according to the similarity matrix and the number k of small base stations per cluster, and selecting the small base station with the best load capacity in each cluster as the cluster head; caching files in the small base stations and the macro base station according to the file caching strategy; after a user accesses a base station, the user requests a file; finally, judging whether the value of k exceeds the maximum cluster size K, and if so, outputting the value of k that minimizes the users' average download delay; otherwise, updating k and re-clustering. The invention allocates the resources within each cluster through the self-organization idea and exploits the mutual cooperation and self-organization among the small base stations, effectively improving the cache hit rate while reducing the users' download delay and satisfying diverse service requirements.
Description
Technical Field
The invention belongs to the technical field of wireless communication, and in particular relates to a caching method based on self-organizing cooperation of small base stations in an ultra-dense network.
Background Art
Ultra-dense networking (UDN), by densely deploying small base stations (SBSs), can greatly improve frequency-reuse efficiency and is one of the key technologies for meeting the thousand-fold capacity growth required by 5G; it is regarded as one of the key technologies for satisfying user demand and increasing system capacity. A UDN typically contains a large number of low-power SBSs deployed far more densely than in current mobile networks, and can provide users with extremely high data transmission rates. At the same time, caching technology has been introduced into ultra-dense networks, and network-edge caching has attracted much attention in recent years. Its main idea is to cache frequently requested content at the network edge so that files are closer to users, which reduces redundant data transmission, shortens users' download delay, improves user experience, and raises spectrum utilization. Combining the UDN scenario with network-edge caching can increase network throughput, reduce users' download delay, and relieve backhaul pressure, greatly improving network performance. However, given the cost of ultra-dense SBS deployment in a UDN, the cache capacity of a single SBS is very limited, so no single SBS can cache every file that users may need. Under this constraint, the caching scheme of the SBSs must weigh multiple factors, such as file popularity, hit rate, and user experience, and may also exploit mutual cooperation and self-organization among the small base stations.
In caching research for ultra-dense network scenarios, Reference 1 seeks to maximize cache utilization under limited cache space. It first transforms the optimization objective into a backhaul-load minimization problem and then uses machine-learning algorithms to predict and cache the corresponding content in the small base stations, achieving low algorithmic complexity and high accuracy. Reference 2 proposes a social-aware caching strategy for small base stations to improve network throughput and energy efficiency. It first applies social-network theory to study the social behavior of small base stations, selects a few very important base stations among those with high social-tie factors, and schedules and allocates resources with the other small cells based on these base stations.
In ultra-dense scenarios, the existing technology mainly optimizes cache content around a single base station and does not fully consider mutual cooperation and self-organization among small base stations. How to serve users efficiently and reduce their download delay when the cache capacity of a single SBS is very limited remains open; addressing these problems can further improve the cache hit rate and the users' service experience.
References:
[1] G. Shen, L. Pei, P. Zhiwen, L. Nan and Y. Xiaohu, "Machine learning based small cell cache strategy for ultra dense networks," 2017 9th International Conference on Wireless Communications and Signal Processing (WCSP), Nanjing, 2017, pp. 1-6.
[2] Y. Li, X. Zhang, J. Zhang, S. Wang and D. Wang, "Base Station Social-Aware Caching Strategy for 5G Ultra Dense Networks," 2017 IEEE Globecom Workshops (GC Wkshps), Singapore, 2017, pp. 1-6.
Summary of the Invention
Since the prior art neither comprehensively considers file popularity, hit rate, and user experience, nor fully considers mutual cooperation and self-organization among small base stations, the invention proposes a caching method based on self-organizing cooperation of small base stations in an ultra-dense network, so as to reduce users' download delay.
The ultra-dense heterogeneous network scenario of the invention includes: one core network, one macro base station, S small base stations densely distributed within the coverage of the macro base station, and U mobile users randomly distributed over the whole network scenario; the macro base station is connected to the core network, and the macro base station is connected to the small base stations, through wireless links. The caching method based on self-organizing cooperation of small base stations in an ultra-dense network proposed by the invention comprises the following steps:
Step 1: obtain the similarity matrix of the small base stations according to the load capacity and location of the small base stations in the ultra-dense network.
The similarity matrix records the similarity between any two distinct small base stations. The similarity ρ(S1,S2) of small base stations S1 and S2 is expressed as ρ(S1,S2) = θ·ρ_d(S1,S2) + (1−θ)·ρ_N(S1,S2), θ∈[0,1], where θ is a set constant, ρ_d(S1,S2) is the position similarity of base stations S1 and S2, and ρ_N(S1,S2) is the load-capacity difference of base stations S1 and S2;
Step 2: cluster all the small base stations according to the similarity matrix of the small base stations and the number k of small base stations per cluster, and select the small base station with the best load capacity in each cluster as the cluster head; k is a positive integer.
Step 3: cache files in the small base stations and the macro base station according to the file caching strategy.
The file caching strategy is as follows: assume the small base stations all have the same cache capacity and can each cache at most M complete files; sort all files that users may request by file popularity; for each cluster, select the kM most popular files, divide each file into k fragments, and cache the fragments in the corresponding small base stations; the macro base station then caches, still in order of popularity, the remaining most popular files.
Step 4: after a user accesses a base station, the user begins requesting files; at this point three cases may arise.
First, judge whether the requested file is cached in the small base stations. If so, the first case occurs: the file requested by the user is cached in the small base stations. If not, judge whether the requested file is cached in the macro base station. If so, the second case occurs: the file requested by the user is cached in the macro base station. If neither holds, the third case occurs: the requested file is not cached anywhere in the network, and the user obtains it through the core network.
Compute the user's transmission delay in each of the three cases to obtain the users' average download delay, then go to step 5.
Step 5: let K, a positive integer, be the maximum number of small base stations per cluster. Judge whether the value of k exceeds K. If so, select and output the value of k that minimizes the users' average download delay; otherwise, increment k by 1 and return to step 2.
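The outer search of steps 2 to 5 over the cluster size k can be sketched as a simple loop. Here `avg_delay_for_k` is a hypothetical stand-in for one pass of steps 2 to 4 (clustering, caching, and delay evaluation for a given k), and the quadratic delay curve in the example is purely illustrative, not a result of the invention.

```python
# Sweep the cluster size k from 1 to K, evaluate the average download
# delay for each k, and keep the best value, as steps 2-5 describe.

def best_cluster_size(K, avg_delay_for_k):
    """Return the k in 1..K minimizing the average download delay."""
    best_k, best_delay = None, float("inf")
    for k in range(1, K + 1):          # step 5: stop once k exceeds K
        delay = avg_delay_for_k(k)     # steps 2-4 for this cluster size
        if delay < best_delay:
            best_k, best_delay = k, delay
    return best_k, best_delay

# Toy example: an illustrative delay curve with a minimum at k = 3.
k_opt, d_opt = best_cluster_size(6, lambda k: (k - 3) ** 2 + 1.0)
```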
Compared with the prior art, the invention has the following clear advantages:
(1) The method proposes a cooperative caching strategy based on SBS clustering. The simulation results show that the method effectively improves the cache hit rate while reducing users' download delay, confirming the feasibility and applicability of the strategy for satisfying diverse service requirements in dense scenarios.
(2) The method exploits the clustering of and mutual cooperation among small base stations, and uses the idea of network self-optimization to improve the efficiency of resource management.
Brief Description of the Drawings
Fig. 1 is a flow chart of the SBS self-organizing cooperative caching method of the invention;
Fig. 2 is a system model diagram of the SBS-clustering cooperative caching method of the invention;
Fig. 3 is a schematic diagram of the SBS-clustering cooperative caching strategy of the invention;
Fig. 4 shows the cache hit rate versus the number of small base stations per cluster in the invention;
Fig. 5 shows the users' average download delay versus the number of small base stations per cluster in the invention;
Fig. 6 shows the users' average download delay versus the total number of small base stations in the scenario in the invention;
Fig. 7 shows the users' average download delay versus the Zipf decay constant α in the invention.
Detailed Description of the Embodiments
To help those of ordinary skill in the art understand and implement the invention, the invention is described in further detail below with reference to the drawings and embodiments.
Addressing how to serve users efficiently and reduce their download delay when the cache capacity of a single SBS is very limited, the invention proposes a caching method based on SBS self-organizing cooperation in an ultra-dense network. The method clusters all SBSs based on their load capacity and location information, and the cluster head selected in each cluster allocates the resources within the cluster using the self-organization idea. The relevant files are then fragmented and the different fragments are cached in different SBSs; each base station caches the same amount of data, with files ordered by popularity. The optimization objective is set to minimizing the users' average download delay; finally, the number of small base stations per cluster is varied, and the optimal value is found by exhaustive search.
As shown in Fig. 1, the invention proposes a cooperative caching method based on SBS self-organization, comprising steps 1 to 5, described in turn below.
Step 1: initialize the parameters and obtain the similarity matrix of the small base stations according to the location information and load-capacity information of all small base stations in the ultra-dense network. The initialized parameters include the location information and load capacity of each base station.
As shown in Fig. 2, the system model of the SBS-clustering cooperative caching method of the invention considers a caching strategy in an ultra-dense heterogeneous network scenario. The network includes one macro base station (MBS) and a core network; the macro base station is connected to the core network, and to the small base stations, through wireless links. There are s small base stations (SBSs) densely distributed within the coverage of the macro base station, and u mobile users randomly distributed over the whole network scenario; the method assumes that users are stationary while receiving files. The s small base stations in the network are denoted by a set of the form S = {S1, S2, ..., Ss}, and the u mobile users by a set of the form U = {U1, U2, ..., Uu}.
The key to clustering the small base stations is choosing the similarity features between base stations. Although many similarity features are available, two of the more important ones are the small base stations' location information and load capacity [Reference 3: S. Samarakoon, M. Bennis, W. Saad, and M. Latva-aho, "Dynamic clustering and on/off strategies for wireless small cell networks," IEEE Transactions on Wireless Communications, vol. 15, no. 3, pp. 2164-2178, March 2016.]. Location information reflects the base stations' ability to cooperate and determines the operability of inter-base-station cooperation. Load capacity raises the problem of mutual interference between base stations and also indicates a base station's willingness to participate in cooperation, determining whether cooperation is necessary. Next, the similarities of location information and of load capacity between base stations are computed separately.
On the one hand, the position similarity of any two distinct small base stations in the ultra-dense network is computed from their location information. For any two distinct small base stations S1 and S2, the corresponding Gaussian similarity, i.e. the position similarity ρ_d(S1,S2), is given by:

ρ_d(S1,S2) = exp(−‖x_{S1} − x_{S2}‖² / (2σ_X²)) if ‖x_{S1} − x_{S2}‖ ≤ r, and ρ_d(S1,S2) = 0 otherwise;

where x_{S1} and x_{S2} denote the coordinates of the two small base stations in Euclidean space, σ_X is a set constant, and r is the maximum distance at which two small base stations are considered neighbors. It can be seen that the smaller the distance between small base stations, the higher their position similarity and the more easily the base stations cooperate with each other. Furthermore, the position similarities between all s small base stations can be represented by an S×S matrix.
On the other hand, the similarity is computed from the load capacity of the small base stations. Since the larger the difference in load capacity between small base stations, the more they should form a cluster for task offloading, the method computes the load-capacity difference between small base stations. For simplicity of calculation, the method assumes that every user requests files at the same frequency, so the load capacity of a small base station can be simplified to its maximum number of connectable users. The load-capacity difference ρ_N(S1,S2) between small base stations S1 and S2 is then computed as an increasing function of the gap between N_{S1} and N_{S2}, the maximum numbers of connectable users of S1 and S2; here σ_N is a set constant used to adjust the magnitude of ρ_N for convenient numerical calculation, and it determines the size of the numerical range of the difference. Like ρ_d above, ρ_N also forms an S×S matrix; the larger its value, the more easily the two corresponding small base stations form a cluster.
Then, from the position similarity ρ_d and the load-capacity difference ρ_N between small base stations S1 and S2, the similarity finally used to decide the clustering is obtained. The similarity of small base stations S1 and S2 is computed as ρ(S1,S2) = θ·ρ_d(S1,S2) + (1−θ)·ρ_N(S1,S2), where θ is a set constant that adjusts the relative influence of the clustering factors; the value of θ determines how strongly the position similarity and the load-capacity difference affect the overall similarity. Computing the similarity of every pair of distinct small base stations yields a similarity matrix of size S×S, on which the clustering of the next step, based on the SBSs' load-capacity similarity and location similarity, is performed.
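Step 1 can be sketched numerically as follows. The truncated Gaussian kernel for position similarity and the convex combination with weight θ follow the symbols described in the text; the exact load-difference formula is an assumption (here a Gaussian-style term growing with the capacity gap), since the original formula images are not preserved, and the default constants are illustrative placeholders.

```python
import numpy as np

# Build the S x S similarity matrix from base-station coordinates and
# maximum connectable-user counts (a sketch under the assumptions above).
def similarity_matrix(coords, n_max, sigma_x=50.0, sigma_n=5.0, r=100.0, theta=0.5):
    coords = np.asarray(coords, dtype=float)   # (S, 2) positions
    n_max = np.asarray(n_max, dtype=float)     # (S,) max connectable users
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    rho_d = np.exp(-d**2 / (2 * sigma_x**2)) * (d <= r)   # position similarity
    gap = np.abs(n_max[:, None] - n_max[None, :])
    rho_n = 1.0 - np.exp(-gap**2 / (2 * sigma_n**2))      # assumed load-difference term
    rho = theta * rho_d + (1 - theta) * rho_n
    np.fill_diagonal(rho, 0.0)                 # similarity is defined for distinct pairs
    return rho
```

Nearby stations with different load capacities thus score high, matching the text's intent that such pairs are the ones that benefit from clustering.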
Step 2: according to the computed similarity matrix, cluster the small base stations following the small-base-station clustering strategy, with k small base stations per cluster. All small base stations in the network scenario can thus be divided into clusters containing the same number of small base stations. In each formed cluster, the small base station with the best load capacity is selected as the cluster head.
In this step, after the number k of small base stations per cluster is determined, a small base station in the scenario is randomly selected as the center of a cluster, and the k−1 small base stations with the highest similarity to it according to the similarity matrix form a cluster with it. The unclustered small base station nearest to the current cluster center is then selected as the center of the next cluster, and again the k−1 small base stations with the highest similarity to that center are selected according to the similarity matrix to form a cluster. The clustering operation is repeated in this way. As clustering proceeds, if fewer than k−1 eligible small base stations remain, several already-clustered small base stations with high similarity are selected to bring the new cluster up to k base stations. This continues until all small base stations are clustered. The cluster head of each cluster is the base station with the best load capacity; it uses the self-optimization idea to allocate the resources within the cluster and to coordinate the assignment of tasks within the cluster.
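The greedy clustering just described can be sketched as follows, assuming the S×S similarity matrix from step 1 and the station coordinates. The first cluster center is taken as the lowest index instead of a random station, and tie-breaking is arbitrary; the rule of padding a short final cluster with already-clustered stations follows the text.

```python
import numpy as np

def greedy_clusters(rho, coords, k):
    rho = np.asarray(rho, dtype=float)
    coords = np.asarray(coords, dtype=float)
    unclustered = set(range(rho.shape[0]))
    clusters = []
    center = min(unclustered)                      # stand-in for a random first pick
    while unclustered:
        unclustered.discard(center)
        # the k-1 unclustered stations most similar to the center join it
        candidates = sorted(unclustered, key=lambda j: -rho[center, j])
        members = [center] + candidates[:k - 1]
        unclustered -= set(members)
        if len(members) < k:                       # pad from already-clustered SBSs
            done = [j for c in clusters for j in c]
            members += sorted(done, key=lambda j: -rho[center, j])[:k - len(members)]
        clusters.append(members)
        if unclustered:                            # nearest unclustered SBS becomes next center
            center = min(unclustered,
                         key=lambda j: float(np.linalg.norm(coords[j] - coords[center])))
    return clusters
```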
Step 3: cache the files in the corresponding small base stations according to the file caching strategy. Since the cache capacity of a small base station is very limited, to improve the cache hit rate the method has the different small base stations of the same cluster cache different fragments of the most popular files, while the macro base station caches the complete files not yet cached elsewhere in the network.
Under the file caching strategy, the small base stations count the number I of file types that users may request, and sort all files by file popularity; the smaller the index, the more popular the file. The most popular files are then fragmented according to the caching strategy and the corresponding fragments are cached in the small base stations; the macro base station caches some complete files of high popularity, still in order of file popularity, caching the most popular files first until its cache space is exhausted.
The method assumes that all small base stations have the same cache capacity, each caching at most M complete files, that the macro base station can cache N complete files, and that users in the network request I file types in total, with M < N < I. All I files in the network are sorted by popularity to obtain the popularity-ordered file library F = {f_1, f_2, ..., f_I}; the smaller a file's index, the more popular it is. Since there are k small base stations in a cluster, each cluster can cache at most kM complete files. As shown in Fig. 3, the SBS-clustering cooperative caching strategy of the invention selects the k×M most popular files, divides each file evenly into k fragments, and caches these fragments in different small base stations. Files whose popularity rank lies between k×M and k×M+N are cached complete in the macro base station.
According to the above caching strategy, the files cached by the k small base stations of each cluster can be modeled mathematically. Let f_{1,k} denote the k-th fragment of the most popular file; then for the k-th small base station, the set C_k of all files it caches is expressed as:

C_k = {f_{1,k}, f_{2,k}, ..., f_{k(M−1)+1,k}, ..., f_{kM,k}}

where f_{kM,k} denotes the k-th fragment of the file of popularity rank kM.
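The placement rule above can be sketched as a small function: in a cluster of k SBSs, SBS j (1-based) stores fragment j of each of the k·M most popular files, and the macro base station stores the next N files complete. Files are identified by popularity rank (1 = most popular); the names here are ours, not the patent's notation.

```python
# Map popularity ranks to cache locations (a sketch of the step-3 rule).
def placement(k, M, N):
    sbs = {j: [(i, j) for i in range(1, k * M + 1)]   # (file rank, fragment index)
           for j in range(1, k + 1)}
    mbs = list(range(k * M + 1, k * M + N + 1))       # complete files on the MBS
    return sbs, mbs
```

Each SBS thus holds kM fragments of size 1/k, i.e. the equivalent of M complete files, matching its stated capacity.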
Step 4: after a user accesses a base station, the user begins requesting files. The transmission strategy and the caching strategy depend on each other, and three cases arise; compute the transmission delay in each case to obtain the users' average download delay, then go to step 5.
When a user wants to obtain a file from the network, the file transfer involves three possible cases, and the user obtains the desired file in one of three ways.
In the first case, the file requested by the user is cached in the small base stations, and the requested file is transmitted via the small base stations.
The user first sends a request for the file to the small base station it is connected to, and that base station forwards the request to the cluster head. The cluster head judges from the popularity of the requested file whether the cluster holds the file; if so, it checks the current load of each small base station in the cluster to decide which small base stations jointly serve the user, and allocates the corresponding resources. If some small base stations caching the needed file fragments cannot communicate with the user directly, the cluster head also decides which base station relays the transmission. This transmission process realizes file delivery from the small base stations to the user through the self-organization idea.
In the second case, the requested file is cached in the macro base station, and the download delay is computed according to MBS transmission. If the cluster head finds that the cluster does not hold the required file, it requests the file from the macro base station. If the macro base station has cached the file, the file is first transmitted from the macro base station to the cluster head and then forwarded by the cluster head to the user.
In the third case, the requested file is not cached anywhere in the network, so it must be fetched through the core network, and the download delay is computed according to core-network transmission. If the macro base station does not hold the file either, it forwards the request to the core network; the core network delivers the file to the macro base station over the wireless link, the macro base station transmits it over the wireless link to the cluster head, and the cluster head forwards it to the user. In this case the download delay grows because the transmission path is longer.
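The three cases can be summarized as a routing decision on the popularity rank of the requested file. A minimal sketch, assuming the cluster stores the k·M most popular files and the macro base station the next N ranks (the function and variable names are ours, not from the patent):

```python
# Decide where a requested file is fetched from, following the three
# cases described above. The rank thresholds assume the cluster caches
# the k*M most popular files and the MBS the next N (illustrative).

def file_source(rank, k, M, N):
    """Return the serving tier for a file of the given popularity rank."""
    if rank <= k * M:
        return "small-cell cluster"   # case 1: served inside the cluster
    if rank <= k * M + N:
        return "macro base station"   # case 2: MBS -> cluster head -> user
    return "core network"             # case 3: fetched via the core network

print(file_source(5, k=4, M=10, N=100))    # within the top 40 -> cluster
print(file_source(120, k=4, M=10, N=100))  # ranks 41..140 -> MBS
print(file_source(150, k=4, M=10, N=100))  # beyond 140 -> core network
```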
Each file has a different popularity, so the probability that a user requests it from the small base stations differs accordingly. The method assumes that the probabilities with which users request different files obey Zipf's law [Reference 4: L. Breslau, P. Cao, L. Fan, G. Phillips, and S. Shenker, "Web caching and Zipf-like distributions: evidence and implications," in INFOCOM '99, Eighteenth Annual Joint Conference of the IEEE Computer and Communications Societies, Proceedings, IEEE, vol. 1, Mar. 1999, pp. 126–134.]. Let P_i denote the probability that a user requests from the small base stations the file whose popularity rank is i. By Zipf's law, P_i is obtained as:

P_i = i^{-α} / Σ_{j=1}^{I} j^{-α}
where α is the decay constant, j indexes the I file types, and i is the popularity rank. Once the request probability of every file is known, the theoretical cache hit rate of the small base stations for a user can be computed by summing P_i over the ranks of the files cached in the cluster.
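The Zipf request model and the resulting hit rate can be sketched as follows; `zipf_probs` and `cache_hit_rate` are our own helper names, and the cached-file count k·M follows the caching policy above:

```python
# Zipf request probabilities P_i = i^-alpha / sum_j j^-alpha, and the
# theoretical hit rate obtained by summing P_i over the cached ranks.

def zipf_probs(num_files, alpha):
    norm = sum(j ** -alpha for j in range(1, num_files + 1))
    return [i ** -alpha / norm for i in range(1, num_files + 1)]

def cache_hit_rate(probs, cached):
    """Hit rate when the cluster stores the `cached` most popular files."""
    return sum(probs[:cached])

probs = zipf_probs(num_files=200, alpha=0.8)
# e.g. a cluster of k = 4 SBSs with M = 10 files each caches the top 40
print(round(cache_hit_rate(probs, 40), 3))
```

Because the probabilities decay with rank, adding more cached files yields diminishing hit-rate gains, which matches the behavior later reported for Fig. 4.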
To simplify the calculation, the method assumes that the small base stations do not share spectrum resources with the macro base station when sending files to users. Moreover, different small base stations within the same cluster are allocated mutually orthogonal frequency bands for file delivery, which avoids interference between them. Let the available bandwidths of a small base station and the macro base station be ω_S and ω_M, respectively; let h_{s,u} be the channel gain between the sth small base station and the uth user, and h_{M,s} the channel gain between the macro base station and the sth small base station. The channel gains are computed as h_{s,u} = L_0 · d_{s,u}^{-β} and h_{M,s} = L_0 · d_{M,s}^{-β}, where L_0 is a constant gain coefficient, β is the path-loss exponent, d_{s,u} is the distance between the sth small base station and the uth user, and d_{M,s} is the distance between the macro base station and the sth small base station.
The signal-to-noise ratio SNR_{s,u} between the sth small base station and the uth user, and the signal-to-noise ratio SNR_{M,s} between the macro base station and the sth small base station, are computed by the following formulas.
SNR_{s,u} = P_S · h_{s,u} / σ²
SNR_{M,s} = P_M · h_{M,s} / σ²
where σ² is the noise power of the network scenario, P_M is the transmit power of the macro base station when communicating with a single small base station, and P_S is the transmit power of a small base station when communicating with a single user.
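The path-loss and SNR model can be sketched numerically as follows. The dB-to-linear conversions are standard; the sample distance and path-loss exponent are illustrative assumptions, while the power, gain, and noise values come from the simulation setup given later:

```python
# Compute h = L0 * d^-beta and SNR = P * h / sigma^2, with the gain
# coefficient given in dB and the noise power in dBm, as in the model.

def db_to_linear(x_db):
    return 10 ** (x_db / 10)

def channel_gain(l0_db, distance, beta):
    return db_to_linear(l0_db) * distance ** -beta

def snr(p_tx_watt, gain, noise_dbm):
    sigma2 = db_to_linear(noise_dbm) * 1e-3  # dBm -> watt
    return p_tx_watt * gain / sigma2

# SBS-to-user link: P_S = 100 mW, L0 = -30 dB, noise -100 dBm (from the
# simulation setup); d = 20 m and beta = 3 are assumed here.
h = channel_gain(l0_db=-30, distance=20, beta=3)
print(snr(0.1, h, -100))  # linear SNR, about 1.25e5 (roughly 51 dB)
```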
In the first case, the cluster head allocates the intra-cluster resources following the self-optimization idea and directs all small base stations in the cluster that are directly connected to the user to transmit the file cooperatively, thereby shortening the download delay. However, some small base stations in the cluster may still need a relay to deliver their fragments, so the download delay also depends on the number of base stations in the cluster that are directly connected to the user.
In the first case, based on the Shannon formula, the transmission delay Delay_u^S of the uth user obtaining the file through the small base stations is:

Delay_u^S = max_{s ∈ {1, …, t}} (Num_s · F/k) / (ω_S · log2(1 + SNR_{s,u}))
where Num_s is the number of file fragments the sth small base station has to transmit to the user, F is the size of the requested file, and t is the number of small base stations communicating directly with the user.
In the second case, the transmission delay Delay_u^M of the uth user obtaining the file through the macro base station comprises two parts, the transmission from the macro base station to the cluster head and the transmission from the cluster head to the user:

Delay_u^M = F / (ω_M · log2(1 + SNR_{M,s})) + F / (ω_S · log2(1 + SNR_{s,u}))

where s here indexes the cluster head.
In the third case, the uth user obtains the file through the core network. The transmission delay Delay_u^C in this case contains Delay_u^M plus the time needed for the core network to communicate with the macro base station; to simplify the calculation, the method represents this part of the delay by the constant Const. Therefore Delay_u^C is computed as:

Delay_u^C = Delay_u^M + Const
Let P_u^S, P_u^M and P_u^C denote the probabilities that the uth user obtains the file from the small base stations, the macro base station and the core network, respectively:

P_u^S = Σ_{i=1}^{kM} P_i,   P_u^M = Σ_{i=kM+1}^{kM+N} P_i,   P_u^C = 1 − P_u^S − P_u^M
The average download delay Delay_u of the uth user is therefore computed as:

Delay_u = P_u^S · Delay_u^S + P_u^M · Delay_u^M + P_u^C · Delay_u^C
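Since the exact closed forms appear as images in the source, the delay combination can only be sketched from the verbal description: a Shannon rate for each hop, a constant for the core-network hop, and a probability-weighted average over the three cases. All numeric values below are illustrative assumptions.

```python
import math

def shannon_rate(bw_hz, snr_linear):
    """Achievable rate bw * log2(1 + SNR), used by every delay term."""
    return bw_hz * math.log2(1 + snr_linear)

def avg_delay(p_sbs, delay_sbs, p_mbs, delay_mbs, p_core, const):
    """Delay_u = P^S*Delay^S + P^M*Delay^M + P^C*(Delay^M + Const)."""
    return p_sbs * delay_sbs + p_mbs * delay_mbs + p_core * (delay_mbs + const)

F = 10e6                                    # 10 Mbit file (simulation value)
d_sbs = F / shannon_rate(1e6, 1.25e5)       # case 1: small-cell delivery
d_mbs = d_sbs + F / shannon_rate(1e6, 1e6)  # case 2: adds MBS->cluster-head hop
print(avg_delay(0.6, d_sbs, 0.3, d_mbs, 0.1, const=1.0))
```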
Step 5: given that each cluster may contain at most K small base stations, the optimization objective of the method is to minimize the users' average download delay while satisfying a given cache hit rate:

min_k Delay_u
subject to: k ∈ {1, 2, …, K}
Changing k adjusts the number of small base stations per cluster. After each change of k, step 2 must be executed again with the new value to re-cluster, and steps 3 and 4 are then repeated to obtain the corresponding small-base-station cache hit rate and average user download delay. By traversing the values of k, the value that minimizes the users' average download delay is found.
In step 5, it is first checked whether k exceeds K. If so, the k value yielding the minimum average download delay is output and the method ends; otherwise k is changed and step 2 is executed again. The method traverses k from small to large up to K.
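The traversal in step 5 amounts to a one-dimensional search over k. A minimal sketch, where `evaluate(k)` stands in for re-running steps 2 through 4 and the toy delay curve (shaped like the one later reported for Fig. 5) is invented for illustration:

```python
# Search k = 1..K for the cluster size minimizing the average download
# delay, as in step 5. evaluate(k) abstracts re-clustering plus the
# hit-rate and delay computations of steps 2-4.

def best_cluster_size(K, evaluate):
    return min(range(1, K + 1), key=evaluate)

# toy delay curve: falls, bottoms out at k = 4, then rises (cf. Fig. 5)
toy_delay = {1: 1.9, 2: 1.4, 3: 1.1, 4: 1.0, 5: 1.2, 6: 1.5}
print(best_cluster_size(6, toy_delay.get))  # → 4
```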
The method is verified by simulation below. The baseline for comparison is the "most popular" file caching scheme [Reference 5: T. Wu, X. Li, H. Ji, and H. Zhang, "An energy-efficient sleep management algorithm for UDN with edge caching," in 2017 IEEE Globecom Workshops (GC Wkshps), Dec. 2017, pp. 1–5.]. In the "most popular" scheme, all small base stations cache the same M complete files of highest popularity.
The simulation verification is as follows:
Unless otherwise specified, the number of file types a user may request is I = 200; the size of a requested file is F = 10 Mbits; the decay constant in Zipf's law is set to α = 0.8; the storage capacity of a small base station is M = 10 files and that of the macro base station is N = 100 files. The simulation scenario is 200 m × 200 m with the macro base station at its center. The network contains 50 small base stations, each with coverage radius r = 30 m; the sub-channel bandwidths of the macro and small base stations are ω_M = ω_S = 1 MHz; the transmit powers are P_S = 100 mW and P_M = 20 W; the noise power is σ² = −100 dBm and the gain coefficient is L_0 = −30 dB. To simplify the calculation, the transmission delay from the core network to the macro base station is fixed at Const = 1 s.
Fig. 4 plots the cache hit rate of the method against the number k of small base stations per cluster: the horizontal axis is the number of base stations, the vertical axis is the average cache hit rate, and the different curves correspond to different numbers of file types. Along any curve, the cache hit rate increases with the number of small base stations per cluster, but at a decreasing rate. More small base stations per cluster means more distinct files cached in the cluster, so the probability that a requested file can be fetched directly from the small base stations, i.e. the cache hit rate, grows. By Zipf's law, files of lower popularity are requested with smaller probability, and since the caching strategy stores files in order of popularity, the later-cached files contribute little to the hit rate, which explains the diminishing growth. Meanwhile, changing the number I of file types users may request also affects the hit rate: the more file types in the network, the smaller the probability share of any single file, and thus the smaller the probability that a file cached in the cluster is requested. Accordingly, for I = 200, I = 240, I = 280 and I = 320, Fig. 4 shows that the smaller I is, the higher the cache hit rate of the small base stations.
Judging by the cache hit rate alone, it might seem that the more small base stations per cluster the better. However, as their number grows, intra-cluster resource allocation becomes harder and the cluster head's computing capability is strained, which may increase the users' average download delay. A trade-off between cache hit rate and average download delay is therefore made next.
Fig. 5 plots the users' average download delay of the method against the number k of small base stations per cluster. Note that when k = 1, files need not be fragmented and every base station caches complete files, which coincides with the "most popular" scheme. As k grows from 1 to 6, the average download delay first decreases and then increases. It decreases because users fetch files faster from small base stations than from the macro base station or the core network. It then increases again because, as the cluster grows, cooperation among the small base stations becomes more complex, more relays may be needed, and the spectrum resources of the cluster are fixed, so obtaining a complete file from the small base stations slows down. Furthermore, with 50 small base stations in total, Fig. 5 allows a comparison with the "most popular" scheme: the point at k = 1 represents the "most popular" scheme and the point at k = 4 the proposed SBS-clustering cooperative caching mechanism. Their average download delays are essentially equal, yet the cache hit rate rises from 0.35 to 0.6, as seen on the I = 200 curve of Fig. 4, whose parameter settings match those of Fig. 5. This confirms that the proposed caching mechanism indeed improves the cache hit rate in the UDN scenario while greatly relieving the pressure on the backhaul network.
Fig. 6 plots the users' average download delay against the total number of small base stations in the scenario. In this simulation the cluster size is 4, and the total number of small base stations in the network grows from 50 to 100 in steps of 10. Under the proposed SBS-clustering cooperative caching mechanism, the average download delay decreases as the total number of small base stations grows, whereas under the "most popular" scheme it barely changes. The reason is that, to simplify the simulation, the method assumes a fixed distance between the user and the cluster-head base station, so varying the total number of small base stations does not affect the user-to-base-station distance under the "most popular" scheme, and its average download delay hardly changes. Under the SBS-clustering mechanism, however, the distance between the user and the base stations other than the cluster head shrinks as the total number of small base stations grows, so the average download delay decreases accordingly.
Fig. 7 plots the users' average download delay against the Zipf decay constant α. Here the cluster size is 4, the network contains 50 small base stations, and α grows from 0.6 to 2.0. As α increases, the average download delay keeps decreasing, with a diminishing rate of decrease. A high α means that the most popular files, which tend to be cached in the small base stations, are requested with higher probability, and fetching a file from a small base station is generally faster than from the macro base station or the core network, so the average download delay falls. In addition, since all small base stations in a cluster share the cluster's spectrum resources, while under the "most popular" scheme each small base station has its own spectrum resources, the per-transmission delay is smaller under the "most popular" scheme. The average download delay, however, also depends on the cache hit rate: for α < 0.9, the SBS-clustering cooperative caching strategy gives users a higher probability of fetching files from the small base stations, and its average download delay is better than that of the "most popular" scheme. As α grows further, the gap in cache hit rate between the two schemes shrinks, so the delay of the "most popular" scheme drops relatively sharply and it eventually outperforms the proposed strategy. In practice, α typically takes values between 0.6 and 0.8.
Claims (5)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811110511.3A CN109194763B (en) | 2018-09-21 | 2018-09-21 | A caching method based on self-organized cooperation of small base stations in ultra-dense networks |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109194763A true CN109194763A (en) | 2019-01-11 |
CN109194763B CN109194763B (en) | 2020-05-26 |
Family
ID=64909291
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811110511.3A Active CN109194763B (en) | 2018-09-21 | 2018-09-21 | A caching method based on self-organized cooperation of small base stations in ultra-dense networks |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109194763B (en) |
Cited By (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109600780A (en) * | 2019-02-18 | 2019-04-09 | 南京邮电大学 | A kind of duplicate of the document caching method based on base station sub-clustering |
CN110022579A (en) * | 2019-04-23 | 2019-07-16 | 重庆邮电大学 | Content caching management method based on base station collaboration |
CN110248206A (en) * | 2019-07-29 | 2019-09-17 | 北京邮电大学 | A kind of resource allocation methods, device and electronic equipment for edge network system |
CN110611698A (en) * | 2019-08-07 | 2019-12-24 | 哈尔滨工业大学(深圳) | A flexible cooperative transmission method and system based on random edge cache and realistic conditions |
CN112073275A (en) * | 2020-09-08 | 2020-12-11 | 广西民族大学 | Content distribution method and device for ultra-dense network UDN |
CN112601256A (en) * | 2020-12-07 | 2021-04-02 | 广西师范大学 | MEC-SBS clustering-based load scheduling method in ultra-dense network |
CN112995979A (en) * | 2021-03-04 | 2021-06-18 | 中国科学院计算技术研究所 | Wireless network cache recommendation method for QoE (quality of experience) requirements of user |
CN114245422A (en) * | 2021-12-06 | 2022-03-25 | 北方工业大学 | Edge active caching method based on intelligent sharing in cluster |
CN114567898A (en) * | 2022-03-09 | 2022-05-31 | 大连理工大学 | Federal learning-based clustering cooperative caching method in ultra-dense network |
CN117320112A (en) * | 2023-10-26 | 2023-12-29 | 陕西思极科技有限公司 | Dual-mode communication network energy consumption balancing method and system |
US20240305849A1 (en) * | 2023-03-09 | 2024-09-12 | Zhejiang Lab | Content caching method for satellite terrestrial integrated network with differentiated interest and access mode |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP2434835A1 (en) * | 2010-09-22 | 2012-03-28 | Deutsche Telekom AG | Coordinated multi-point transmission COMP data and signalling on X2-interface using additional VLAN identifier |
CN102695227A (en) * | 2012-06-04 | 2012-09-26 | 中国科学技术大学 | Method for cooperatively transmitting data by home enhanced Node B (HeNB) and HeNB |
US8862814B2 (en) * | 2011-08-10 | 2014-10-14 | International Business Machines Corporation | Video object placement for cooperative caching |
CN105939388A (en) * | 2016-06-28 | 2016-09-14 | 华为技术有限公司 | Method for pushing business content and content controller |
CN106792995A (en) * | 2016-12-27 | 2017-05-31 | 北京邮电大学 | The user access method of content low time delay transmission is ensured in a kind of following 5G networks |
CN107493328A (en) * | 2017-08-14 | 2017-12-19 | 武汉大学 | A kind of Cooperative caching method of feature based fusion |
CN107548102A (en) * | 2017-08-16 | 2018-01-05 | 北京邮电大学 | The node B cache method of user's time delay is minimized in a kind of edge cache network |
CN108322352A (en) * | 2018-03-19 | 2018-07-24 | 北京工业大学 | It is a kind of based on the honeycomb isomery caching method to cooperate between group |
Non-Patent Citations (3)
Title |
---|
XI LI et al.: "An Energy-Efficient Sleep Management Algorithm for UDN with Edge Caching", 2017 IEEE Globecom Workshops (GC Wkshps) * |
XIANGYANG ZHANG et al.: "Clustered device-to-device caching based on file preferences", 2016 IEEE 27th Annual International Symposium on Personal, Indoor, and Mobile Radio Communications (PIMRC) * |
GUO ZHENG: "Research on Cache-Based Cloud Radio Access Networks", China Master's Theses Full-text Database, Information Science and Technology * |
Cited By (17)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109600780B (en) * | 2019-02-18 | 2021-10-29 | 南京邮电大学 | A file copy caching method based on base station clustering |
CN109600780A (en) * | 2019-02-18 | 2019-04-09 | 南京邮电大学 | A kind of duplicate of the document caching method based on base station sub-clustering |
CN110022579A (en) * | 2019-04-23 | 2019-07-16 | 重庆邮电大学 | Content caching management method based on base station collaboration |
CN110248206A (en) * | 2019-07-29 | 2019-09-17 | 北京邮电大学 | A kind of resource allocation methods, device and electronic equipment for edge network system |
CN110611698A (en) * | 2019-08-07 | 2019-12-24 | 哈尔滨工业大学(深圳) | A flexible cooperative transmission method and system based on random edge cache and realistic conditions |
CN112073275A (en) * | 2020-09-08 | 2020-12-11 | 广西民族大学 | Content distribution method and device for ultra-dense network UDN |
CN112073275B (en) * | 2020-09-08 | 2022-06-21 | 广西民族大学 | Content distribution method and device for ultra-dense network UDN |
CN112601256B (en) * | 2020-12-07 | 2022-07-15 | 广西师范大学 | A load scheduling method based on MEC-SBS clustering in ultra-dense networks |
CN112601256A (en) * | 2020-12-07 | 2021-04-02 | 广西师范大学 | MEC-SBS clustering-based load scheduling method in ultra-dense network |
CN112995979B (en) * | 2021-03-04 | 2022-01-25 | 中国科学院计算技术研究所 | Wireless network cache recommendation method for QoE (quality of experience) requirements of user |
CN112995979A (en) * | 2021-03-04 | 2021-06-18 | 中国科学院计算技术研究所 | Wireless network cache recommendation method for QoE (quality of experience) requirements of user |
CN114245422A (en) * | 2021-12-06 | 2022-03-25 | 北方工业大学 | Edge active caching method based on intelligent sharing in cluster |
CN114567898A (en) * | 2022-03-09 | 2022-05-31 | 大连理工大学 | Federal learning-based clustering cooperative caching method in ultra-dense network |
CN114567898B (en) * | 2022-03-09 | 2024-01-02 | 大连理工大学 | Clustering collaborative caching method based on federal learning under ultra-dense network |
US20240305849A1 (en) * | 2023-03-09 | 2024-09-12 | Zhejiang Lab | Content caching method for satellite terrestrial integrated network with differentiated interest and access mode |
CN117320112A (en) * | 2023-10-26 | 2023-12-29 | 陕西思极科技有限公司 | Dual-mode communication network energy consumption balancing method and system |
CN117320112B (en) * | 2023-10-26 | 2024-05-03 | 陕西思极科技有限公司 | Dual-mode communication network energy consumption balancing method and system |
Also Published As
Publication number | Publication date |
---|---|
CN109194763B (en) | 2020-05-26 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||