Abstract
Bandwidth allocation problems over wireless local area networks have attracted extensive research recently, owing to the rapid growth in the number of users and of bandwidth-intensive applications. In this paper, a bandwidth allocation problem over a wireless local area network with a directed topology is investigated; the global objective function consists of local downloading and uploading costs, subject to constraints on both the feasible allocation region and the network resources. An improved, high-efficiency gradient-push algorithm is proposed for this problem, which not only guarantees successful data transmission but also minimizes the global objective function. Compared with existing distributed algorithms, first, weighted running averages of the bandwidth replace the current state variables, which ensures that the solution converges to the optimal value asymptotically with probability one. Second, noisy gradient samples are used in the proposed algorithm in place of exact gradient information, which enhances robustness and broadens the scope of application. Theoretical analysis establishes the convergence rate of the time-averaged value to the optimal solution. Finally, numerical examples are presented to validate the proposed algorithm.
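The push-sum style gradient iteration described above can be sketched as follows. This is a minimal toy illustration, not the authors' implementation: the quadratic local costs, the directed ring topology, the self-weights, the noise level, and the step size \(c(t) = 1/t\) are all our own assumptions. Each node mixes a numerator and a weight through a column-stochastic matrix, de-biases its state, and descends along a noisy gradient sample.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (ours): m users on a directed ring minimize sum_i f_i(x) with
# f_i(x) = (x - b_i)^2, so the global optimum is mean(b). The mixing matrix A
# is column-stochastic, as the push-sum protocol over directed graphs requires.
m = 5
b = rng.uniform(0.0, 10.0, m)
alpha = np.array([0.3, 0.5, 0.7, 0.4, 0.6])   # assumed self-weights
A = np.zeros((m, m))
for j in range(m):
    A[j, j] = alpha[j]                 # keep part of the mass
    A[(j + 1) % m, j] = 1 - alpha[j]   # push the rest to the next node

x = np.zeros(m)   # push-sum numerators
w = np.ones(m)    # push-sum weights
for t in range(1, 20001):
    x, w = A @ x, A @ w
    z = x / w                                      # de-biased state estimates
    grad = 2 * (z - b) + rng.normal(0.0, 0.1, m)   # zero-mean noisy gradients
    x = x - (1.0 / t) * grad                       # diminishing step c(t) = 1/t

print(z, b.mean())  # every entry of z ends up close to mean(b)
```

With a diminishing step size and zero-mean gradient noise, each de-biased estimate \(z_i\) approaches the common minimizer, matching the qualitative behavior claimed in the abstract.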
Acknowledgements
This research was supported by the National Natural Science Foundation of China under Grants 61673219 and 61673214, the 13th Five-Year Plan for Equipment Pre-research on Common Technology under Grant 41412040101, and the Tianjin Major Projects of Science and Technology under Grant 15ZXZNGX00250.
Additional information
Communicated by Jyh-Horng Chou.
Appendices
Appendix A: Summary of Notations
See Table 2.
Appendix B: Proof of Theorem 3.1
For any user \(i \in v\), the following can be obtained from Lemma 3.2:
The last term \( - 2c(t) \cdot {s_i}{(t + 1)^T}\left( {{\mathbf{x}_i}(t) - \mathbf{x}} \right) \) can be transformed as:
From (17), taking the conditional expectation with respect to \({F_t}\) on both sides of (16), we obtain:
We denote \({B(t)} = \max \left\{ {\dfrac{{{B_y}}}{{{\beta (t)}{r^2}}},\dfrac{{{B_y}}}{{{r^3}}}} \right\} \), so that \(B(t) \le \dfrac{{{B_y}}}{{\beta (t) \cdot {r^2}}}\). From Lemma 3, \(\left\| {{{\mathbf{z}}_i}(t)} \right\| \le B(t)\) and \({\left\| {{{\mathbf{s}}_i}(t + 1)} \right\| ^2} \le {\left( {{C_f} + {C_g}B(t) + {C_n}} \right) ^2}\). Since \(\left\| {{N_i}(\mathbf{x}_i(t))} \right\| \le {C_n}\) and \(E\left[ {{{N_i}(\mathbf{x}_i(t))}|{F_t}} \right] = 0\), by the law of total expectation the following inequality holds with probability one.
From (10) and Lemma 3.2, the following inequality can be obtained.
From the convexity of norm and \({\hat{\varvec{\mu }} _i}(t + 1) = \sum \nolimits _{j \in N_i^{in}(t)} {{a_{ij}}(t) \cdot {{\varvec{\mu }} _j}(t)}\), we have
Since \(B(t) \le \dfrac{{{B_y}}}{{\beta (t) \cdot {r^2}}}\), \({\left\| {\dfrac{{{{\hat{\mathbf{y}}}_i}(t + 1)}}{{w_i^2(t + 1)}} - \beta (t) \cdot \dfrac{{{{\hat{\varvec{\mu }} }_i}(t + 1)}}{{{w_i}(t + 1)}}} \right\| ^2} \le \dfrac{{4B_y^2}}{{{r^6}}}\).
We denote \({\bar{\mathbf{y}}}(t) = \dfrac{1}{m}\sum \nolimits _{i = 1}^m {{{\mathbf{y}}_i}(t)}\); then the upper bound of the third term of (20) is
Since \({\left\| {{{\hat{\varvec{\mu }} }_i}(t)} \right\| ^2} - w_i^2(t + 1){\left\| {\mathbf{z}} \right\| ^2} \le 2\hat{\varvec{\mu }} _i^T(t)\left( {{{\hat{\varvec{\mu }} }_i}(t) - {w_i}(t + 1) \cdot {\mathbf{z}}} \right) \), the last term of (20) satisfies
Since \({{\mathbf{y}}_i}(0) = {g_i}\left( {{\mathbf{x}_i}(0)} \right) \) and \(\left\| {\bar{\mathbf{y}}(t)} \right\| \le {B_g}\), we obtain
Let \({\mathbf{x}} = {{\mathbf{x}}^*}\), where \({{\mathbf{x}}^*}: = \mathop {\arg \min }\limits _{{\mathbf{x}} \in X} \sum \nolimits _{i = 1}^m {{f_i}({\mathbf{x}})}\) and \(\sum \nolimits _{i = 1}^m {{g_i}({\mathbf{x}}_i^*)} \le 0\). From the definition of \({L_i}({\mathbf{x}},{\mathbf{z}})\), \(\sum \nolimits _{i = 1}^m {E\left[ {{L_i}\left( {{{\mathbf{x}}_i}(t),{\mathbf{z}}} \right) - {L_i}\left( {{{\mathbf{x}}^*},\bar{\varvec{\mu }} (t)} \right) } \right] } \ge \sum \nolimits _{i = 1}^m E\left[ {f_i}\left( {{{\mathbf{x}}_i}(t)} \right) \right. \left. + {{\mathbf{z}}^T} \cdot {g_i}\left( {{{\mathbf{x}}_i}(t)} \right) - {f_i}\left( {{{\mathbf{x}}^*}} \right) \right] \) holds. Thus,
Let \({\mathbf{z}} = \mathbf{0}\) and sum up \(c(t) \cdot \sum \nolimits _{i = 1}^m {E[{f_i}({{\mathbf{x}}_i}(t)) - {f_i}({{\mathbf{x}}^*})]} \) from \(t=1\) to \(t=T\).
We use \({B_\mu }\) to denote the maximal value of \(\left\| {{{\varvec{\mu }} _i}(1)} \right\| \), \(\forall i \in v\), i.e. \({B_\mu } = \mathop {\max }\limits _{\forall i \in v} \left\{ {\max \left\{ {\dfrac{{{w_i}(2){B_y}}}{{a \cdot \beta (1) \cdot {r^2}}},\dfrac{{{w_i}(2){B_y}}}{{a \cdot {r^3}}}} \right\} } \right\} \). The first term of (26) satisfies
To determine the upper bound of the third term of (26), let \({{\varvec{\varepsilon }} _{{y_i}}}(t + 1) = {g_i}\left( {{{\tilde{\mathbf{x}}}_i}(t + 1)} \right) - {g_i}\left( {{{\tilde{\mathbf{x}}}_i}(t)} \right) \), from Lemma 3.1, we obtain
where the term \(\left\| {{{\varvec{\varepsilon }} _{{y_i}}}(t + 1)} \right\| \) satisfies \(\left\| {{{\varvec{\varepsilon }}_{{y_i}}}(t + 1)} \right\| = \left\| {{g_i}\left( {{{\mathbf{x}}_i}(t + 1)} \right) - {g_i}\left( {{{\mathbf{x}}_i}(t)} \right) } \right\| \le {C_g} \cdot \left\| {{{\mathbf{x}}_i}(t + 1) - {{\mathbf{x}}_i}(t)} \right\| \). Since \(\left\| {{{\mathbf{x}}_i}(t + 1) - {{\mathbf{x}}_i}(t)} \right\| \le c(t) \cdot \left\| {{{\mathbf{s}}_i}(t + 1)} \right\| \le c(t) \cdot \left( {{C_f} + {C_g} \cdot B(t) + {C_n}} \right) \), we have \(\left\| {{{\varvec{\varepsilon }} _{{y_i}}}(t + 1)} \right\| \le c(t) \cdot {C_g} \cdot \left( {{C_f} + \dfrac{{{C_g} \cdot {B_y}}}{{\beta (t){r^2}}} + {C_n}} \right) \), and by Jensen's inequality \({\left\| {{{\varvec{\varepsilon }} _{{y_i}}}(t + 1)} \right\| _1} \le \sqrt{2} \cdot c(t) \cdot {C_g} \cdot \left( {{C_f} + \dfrac{{{C_g} \cdot {B_y}}}{{\beta (t){r^2}}} + {C_n}} \right) \) holds. Since \(\left\{ {c(t)} \right\} \) and \(\left\{ {\dfrac{{c(t)}}{{\beta (t)}}} \right\} \) are non-increasing sequences, \(\sum \nolimits _{t = 1}^T {\sum \nolimits _{s = 1}^t {{\lambda ^{t - s}}} } \cdot {c^2}(s) \le \dfrac{1}{{1 - \lambda }}\sum \nolimits _{t = 1}^T {{c^2}(t)}\) and \(\sum \nolimits _{t = 1}^T {\sum \nolimits _{s = 1}^t {{\lambda ^{t - s}}} } \cdot \dfrac{{{c^2}(s)}}{{\beta (s)}} \le \dfrac{1}{{1 - \lambda }}\sum \nolimits _{t = 1}^T {\dfrac{{{c^2}(t)}}{{\beta (t)}}}\). Then we obtain
The fourth term of (26) satisfies \(\sum \nolimits _{t = 1}^T {\sum \nolimits _{i = 1}^m {\dfrac{1}{2}\{ {{\left\| {{{\mathbf{x}}_i}(t) - {\mathbf{x}}_i^*} \right\| }^2} - {{\left\| {{{\mathbf{x}}_i}(t + 1) - {\mathbf{x}}_i^*} \right\| }^2}\} } } \le 2B_X^2 \cdot m\).
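The fourth-term bound rests on a telescoping sum: the successive differences cancel, leaving only the first and last terms. A quick numerical sanity check (with arbitrary nonnegative stand-in values of our own) confirms the cancellation:

```python
import random

# Stand-ins for ||x_i(t) - x*||^2: any bounded nonnegative sequence works.
random.seed(0)
d = [random.uniform(0.0, 4.0) for _ in range(50)]

# sum_t (d[t] - d[t+1]) telescopes to d[0] - d[-1], which is at most max(d).
total = sum(d[t] - d[t + 1] for t in range(len(d) - 1))
print(abs(total - (d[0] - d[-1])) < 1e-12)  # True
```

Since each \(\|\mathbf{x}_i(t) - \mathbf{x}_i^*\|^2\) is uniformly bounded over the compact feasible set, the telescoped sum is bounded by a constant independent of \(T\), which is exactly what the fourth-term estimate uses.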
To determine the upper bound of the fifth term of (26), let \({{\varvec{\varepsilon }} _{{\mu _i}}}(t + 1) = {\left[ {{{\hat{\varvec{\mu }} }_i}(t + 1) + c(t) \cdot {g_i}\left( {{{\mathbf{x}}_i}(t + 1)} \right) } \right] _ + } - {\hat{\varvec{\mu }} _i}(t + 1)\); then \({{\varvec{\mu }} _i}(t + 1) = {\hat{\varvec{\mu }} _i}(t + 1) + {{\varvec{\varepsilon }} _{{\mu _i}}}(t + 1)\). From Lemma 3.1, we obtain
Assume that \({{\varvec{\mu }} _i}(0) \in {\mathbb {R}^n_+ }\) holds for any user \(i \in v\). Since \({\hat{\varvec{\mu }} _i}(t)\) is a convex combination of \({{\varvec{\mu }} _j}(t)\), \(j \in N_i^{in}(t)\), it follows that \({\hat{\varvec{\mu }} _i}(t) \in {\mathbb {R}^n_+ }\) for any user \(i \in v\). From Lemma 3.3, we have
Then \({\left\| {{{\varvec{\varepsilon }}_{{\mu _i}}}(t + 1)} \right\| _1} \le \dfrac{{2\sqrt{2} c(t){B_y}}}{{{r^3}}}\). Clearly, there exists a scalar \(\hat{\mu }\) such that \({\left\| {{{\varvec{\mu }} _i}(0)} \right\| _1} \le \hat{\mu }\) holds for any user \(i \in v\). From (30), we obtain \( \sum \nolimits _{t = 1}^T {\sum \nolimits _{i = 1}^m {c(t) \cdot \left\| {{{\mathbf{z}}_i}(t + 1) - \bar{\varvec{\mu }} (t)} \right\| } } \le \dfrac{{8m \cdot c(1) \cdot \hat{\mu }}}{{\delta (1 - \lambda )}} + \dfrac{{16\sqrt{2} m{B_y}}}{{\delta (1 - \lambda ){r^3}}}\sum \nolimits _{t = 1}^T {{c^2}(t)} \); thus, the fifth term of inequality (26) is bounded.
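Both this step and the third-term estimate rely on the geometric-series bound \(\sum _{t = 1}^T \sum _{s = 1}^t \lambda ^{t - s} c^2(s) \le \frac{1}{1 - \lambda } \sum _{t = 1}^T c^2(t)\), which follows by exchanging the order of summation. It can be verified numerically; the values \(\lambda = 0.8\) and \(c(t) = 1/t\) below are our own sample choices:

```python
lam, T = 0.8, 200
c = [1.0 / t for t in range(1, T + 1)]  # non-increasing step sizes c(t) = 1/t

# Left-hand side: the double sum with geometric memory lambda^(t-s).
lhs = sum(lam ** (t - s) * c[s - 1] ** 2
          for t in range(1, T + 1) for s in range(1, t + 1))
# Right-hand side: 1/(1 - lambda) times the plain sum of squares.
rhs = sum(ci ** 2 for ci in c) / (1 - lam)
print(lhs <= rhs)  # True
```

Exchanging the summation order gives \(\sum _{s=1}^T c^2(s) \sum _{t \ge s} \lambda ^{t-s} \le \frac{1}{1-\lambda }\sum _{s=1}^T c^2(s)\), so the bound in fact holds for any nonnegative sequence \(c^2(s)\).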
The upper bound of the final term of (26) can be obtained.
Then the following inequality relation holds.
Since \({\tilde{\mathbf{x}}_i}(t + 1)\) is a convex combination of past values of \({{\mathbf{x}}_i}(t)\), \({\tilde{\mathbf{x}}_i}(t + 1) \in {X_i}\) holds for any \(t > 0\), and \(\sum \nolimits _{i = 1}^m {{f_i}\left( {{{\tilde{\mathbf{x}}}_i}(t + 1)} \right) } \le \sum \nolimits _{i = 1}^m {\dfrac{{\sum \nolimits _{r = 1}^{t+1} {c(r) \cdot {f_i}\left( {{{\mathbf{x}}_i}(r)} \right) } }}{{\sum \nolimits _{r = 0}^t {c(r)} }}}\). This implies
From (35), we obtain
Since the step-size sequences \(\left\{ {c(t)} \right\} \), \(\left\{ {\beta (t)} \right\} \) satisfy \(c(t) > 0\), \(\mathop {\lim }\limits _{t \rightarrow \infty } \sum \nolimits _{r = 0}^t {c(r)} = \infty \), \(\mathop {\lim }\limits _{t \rightarrow \infty } \sum \nolimits _{r = 0}^t {{c^2}(r)} < \infty \), \(\sum \nolimits _{t = 1}^\infty {\dfrac{{{c^2}(t)}}{{\beta (t)}}} < \infty \) and \(\sum \nolimits _{t = 1}^\infty {\dfrac{{{c^2}(t)}}{{\beta ^2 (t)}}} < \infty \),
\(\mathop {\lim }\limits _{t \rightarrow \infty } \sum \nolimits _{i = 1}^m {E\left[ {{f_i}\left( {{{\tilde{\mathbf{x}}}_i}(t + 1)} \right) - {f_i}\left( {{{{\mathbf{x}}_i}^*}} \right) } \right] = 0}\) holds, and Theorem 3.1 is proved.
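One concrete step-size pair meeting all four conditions (our own example, not one prescribed by the paper) is \(c(t) = 1/t\) and \(\beta (t) = t^{-1/4}\): then \(c^2(t)\), \(c^2(t)/\beta (t)\) and \(c^2(t)/\beta ^2(t)\) decay with exponents \(2\), \(1.75\) and \(1.5\), all greater than \(1\), so those series converge, while \(\sum c(t)\) diverges like \(\log T\). A quick check of the partial sums:

```python
def partial(f, T):
    # Partial sum of f(t) for t = 1, ..., T.
    return sum(f(t) for t in range(1, T + 1))

c = lambda t: 1.0 / t           # step size c(t) = 1/t        (assumed example)
beta = lambda t: t ** -0.25     # regularizer beta(t) = t^(-1/4)

T = 10 ** 5
print(partial(c, T))                                   # grows like log T
print(partial(lambda t: c(t) ** 2, T))                 # -> pi^2/6, convergent
print(partial(lambda t: c(t) ** 2 / beta(t), T))       # exponent 1.75, bounded
print(partial(lambda t: c(t) ** 2 / beta(t) ** 2, T))  # exponent 1.5, bounded
```

The divergent harmonic partial sum keeps growing with \(T\), while the other three plateau below small constants, consistent with the convergence conditions above.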
About this article
Cite this article
Shi, Z., Zhou, C. An Improved Distributed Gradient-Push Algorithm for Bandwidth Resource Allocation over Wireless Local Area Network. J Optim Theory Appl 183, 1153–1176 (2019). https://doi.org/10.1007/s10957-019-01588-7
Keywords
- Bandwidth-intensive applications
- Bandwidth allocation
- Distributed optimization
- Directed communication topology
- Noisy gradient sample
- Push-sum protocol