Abstract
This paper presents an efficient global optimization algorithm for solving a class of linear multiplicative problems (LMP). The algorithm first converts LMP into an equivalent problem (EP) via a variable transformation, and a convex relaxation problem is then constructed to provide a lower bound on the optimal value of EP. Consequently, solving LMP is reduced to solving a series of convex programs. In addition, a pruning rule is developed to remove portions of the investigated region that cannot contain an optimal solution of EP, and a strategy is proposed that supplies more candidate feasible solutions for updating the upper bound on the optimal value of LMP. We also analyze the convergence of the algorithm and establish its complexity. Finally, numerical results confirm the effectiveness of our approach.


Data availability
No data was used for the research described in the article.
References
Benson HP, Boger GM (2000) Outcome-space cutting-plane algorithm for linear multiplicative programming. J Optim Theory Appl 104:301–332
Cambini R (2020) Underestimation functions for a rank-two partitioning method. Decis Econ Finan 43(2):465–489
Cambini R, Venturi I (2021) A new solution method for a class of large dimension rank-two nonconvex programs. IMA J Manag Math 32(2):115–137
Cambini R, Riccardi R, Scopelliti D (2023) Solving linear multiplicative programs via branch-and-bound: a computational experience. CMS 20:38. https://doi.org/10.1007/s10287-023-00471-1
Chen YQ, Jiao HW (2009) A nonisolated optimal solution of general linear multiplicative programming problems. Comput Oper Res 36:2573–2579
Gao YL, Xu CX, Yang YJ (2006) An outcome-space finite algorithm for solving linear multiplicative programming. Appl Math Comput 179(2):494–505
Gao YL, Wu GR, Ma WM (2010) A new global optimization approach for convex multiplicative programming. Appl Math Comput 216(4):1206–1218
Goyal V, Genc-Kaya L, Ravi R (2011) An FPTAS for minimizing the product of two nonnegative linear cost functions. Math Program 126(2):401–405
Hou ZS, Liu SY (2023) An accelerating outer space algorithm for globally solving generalized linear multiplicative problems. Numer Algorithm 94:877–904. https://doi.org/10.1007/s11075-023-01523-y
IBM ILOG CPLEX (2013) IBM ILOG CPLEX 12.6 user’s manual for CPLEX, Version 12.10.0.0 Copyright (c), IBM Corp. http://www.cplex.com. Accessed 1 Mar 2023
Jiao HW, Liu SY, Chen YQ (2012) Global optimization algorithm of a generalized linear multiplicative programming. J Appl Math Comput 40:551–568
Jiao HW, Wang WJ, Chen RJ et al (2020) An efficient outer space algorithm for generalized linear multiplicative programming problem. IEEE Access 8:80629–80637
Jiao HW, Wang WJ, Yin JB, Shang YL (2022) Image space branch-reduction-bound algorithm for globally minimizing a class of multiplicative problems. RAIRO-Oper Res 56(3):1533–1552
Jiao HW, Wang WJ, Shang YL (2023) Outer space branch-reduction-bound algorithm for solving generalized affine multiplicative problems. J Comput Appl Math 419:114784. https://doi.org/10.1016/j.cam.2022.114784
Jiao HW, Li BB, Yang WQ (2024) A criterion-space branch-reduction-bound algorithm for solving generalized multiplicative problems. J Glob Optim. https://doi.org/10.1007/s10898-023-01358-w
Konno H, Yajima Y (1990) Solving rank two bilinear programs by parametric simplex algorithms. Institute of Human and Social Sciences Working Paper IHSS 90-17, Tokyo Institute of Technology, Tokyo, Japan
Liu SY, Zhao YF (2016) An efficient algorithm for globally solving generalized linear multiplicative programming. J Comput Appl Math 296:840–847
Luo HZ, Bai XD, Lim G et al (2019) New global algorithms for quadratic programming with a few negative eigenvalues based on alternative direction method and convex relaxation. Math Program Comput 11:119–171
Matsui T (1996) NP-hardness of linear multiplicative programming and related problems. J Glob Optim 9(2):113–119
McCormick GP (1976) Computability of global solutions to factorable nonconvex programs: Part I—Convex underestimating problems. Math Program 10:147–175
Oliveira RM, Ferreira AVP (2010) An outcome space approach for generalized convex multiplicative programs. J Glob Optim 47:107–118
Shen PP, Huang BD (2020) Global algorithm for solving linear multiplicative programming problems. Optim Lett 14:693–710
Shen PP, Wang CF (2017) Linear decomposition approach for a class of nonconvex programming problems. J Inequal Appl 2017:74. https://doi.org/10.1186/s13660-017-1342-y
Shen PP, Wang KM, Lu T (2020) Outer space branch and bound algorithm for solving linear multiplicative programming problems. J Glob Optim 78:453–482
Shen PP, Wang KM, Lu T (2020) Global optimization algorithm for solving linear multiplicative programming problems. Optimization 71(6):1421–1441
Shen PP, Wu DX, Wang YF (2023) An efficient spatial branch-and-bound algorithm using an adaptive branching rule for linear multiplicative programming. J Comput Appl Math 426:115100. https://doi.org/10.1016/j.cam.2023.115100
Shen PP, Wu DX, Wang KM (2023) Globally minimizing a class of linear multiplicative forms via simplicial branch-and-bound. J Glob Optim 86:303–321. https://doi.org/10.1007/s10898-023-01277-w
Shen PP, Deng YP, Wu DX (2023) A criterion space algorithm for solving linear multiplicative programming problems. Numer Algorithm. https://doi.org/10.1007/s11075-023-01689-5
Tuy H (1998) Convex analysis and global optimization. Kluwer Academic, Dordrecht
Wang CF, Liu SY, Shen PP (2012) Global minimization of a generalized linear multiplicative programming. Appl Math Model 36:2446–2451
Wang CF, Bai YQ, Shen PP (2017) A practicable branch-and-bound algorithm for globally solving multiplicative programming. Optimization 66(3):397–405
Wang CF, Deng YP, Shen PP (2022) A novel convex relaxation-strategy-based algorithm for solving linear multiplicative problems. J Comput Appl Math. https://doi.org/10.1016/j.cam.2021.114080
Yang LP, Shen PP, Pei YG (2014) A global optimization approach for solving generalized nonlinear multiplicative programming problem. Abstr Appl Anal. https://doi.org/10.1155/2014/641909
Yin JB, Jiao HW, Shang YL (2019) Global algorithm for generalized affine multiplicative programming problem. IEEE Access 7:162245–162253
Zhang B, Gao YL, Liu X et al (2020) Output-space branch-and-bound reduction algorithm for a class of linear multiplicative programs. Mathematics 8:315. https://doi.org/10.3390/math8030315
Zhao YF, Liu SY (2016) An efficient method for generalized linear multiplicative programming problem with multiplicative constraints. Springerplus 5(1):1302
Zhao YF, Zhao T (2018) Global optimization for generalized linear multiplicative programming using convex relaxation. Math Probl Eng. https://doi.org/10.1155/2018/9146309
Zhou XG, Cao BY, Wu K (2015) Global optimization method for linear multiplicative programming. Acta Math Appl Sin 31(2):325–334
Zhou HY, Li GH, Gao XL, Hou ZS (2022) Image space accelerating algorithm for solving a class of multiplicative programming problems. Math Probl Eng. https://doi.org/10.1155/2022/1565764
Acknowledgements
We thank the reviewers for their valuable comments and suggestions, which helped us improve the quality of the paper. This research was supported by the National Natural Science Foundation of China (Grant Nos. 12071133 and 11871196).
Ethics declarations
Conflict of interest
The authors have no conflicts of interest to declare that are relevant to the content of this article.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
This work was supported by the National Natural Science Foundation of China (Grant Nos. 12071133 and 11871196).
Appendix A. The derivation of the relaxations in Cambini et al. (2023)
We first report the two linear relaxations offered by Cambini et al. (2023) for Problem 2. To this end, the following \(4p\) linear programs need to be solved:
Then the two linear relaxations of Problem 2 can be expressed as follows:
Next, we introduce the six convex relaxation programs in Cambini et al. (2023). To this end, we first rewrite \(\varphi (x)\) in the following three D.C. forms, where a D.C. function means the difference of two convex functions (a brief numerical check of these identities is sketched after the list):
(i) \(\varphi (x)=\sum \nolimits _{i=1}^{p}({c}_{0i}{d}_{0i}+( {c}_{0i}d_{i}+{d}_{0i}c_{i})^{\top }x)+\frac{1}{2}\sum \nolimits _{i=1}^{p} (c_{i}^{\top }x+d_{i}^{\top }x) ^{2}-\frac{1}{2}\sum \nolimits _{i=1}^{p} ((c_{i}^{\top }x)^2+ (d_{i}^{\top }x)^2)\);
(ii) \(\varphi (x)=\sum \nolimits _{i=1}^{p}({c}_{0i}{d}_{0i}+( {c}_{0i}d_{i}+{d}_{0i}c_{i})^{\top }x)+\frac{1}{4}\sum \nolimits _{i=1}^{p} (c_{i}^{\top }x+d_{i}^{\top }x) ^{2}-\frac{1}{4}\sum \nolimits _{i=1}^{p} (c_{i}^{\top }x-d_{i}^{\top }x)^2\);
(iii) \(\varphi (x)=\sum \nolimits _{i=1}^{p}({c}_{0i}{d}_{0i}+( {c}_{0i}d_{i}+{d}_{0i}c_{i})^{\top }x)+\frac{1}{2}\sum \nolimits _{i=1}^{p} ((c_{i}^{\top }x)^2+ (d_{i}^{\top }x)^2)-\frac{1}{2}\sum \nolimits _{i=1}^{p} (c_{i}^{\top }x-d_{i}^{\top }x)^2\).
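The following sketch (not from Cambini et al. 2023; the data are random placeholders) verifies numerically that the three D.C. forms (i)–(iii) all coincide with the product objective \(\varphi (x)=\sum \nolimits _{i=1}^{p}({c}_{0i}+c_{i}^{\top }x)({d}_{0i}+d_{i}^{\top }x)\), which is the form implied by the affine terms above:

```python
# Numerical check that the D.C. forms (i)-(iii) reproduce
#   phi(x) = sum_i (c0_i + c_i^T x)(d0_i + d_i^T x).
# All data here are random placeholders, not from the paper.
import numpy as np

rng = np.random.default_rng(0)
n, p = 5, 3
c0, d0 = rng.normal(size=p), rng.normal(size=p)
c, d = rng.normal(size=(p, n)), rng.normal(size=(p, n))
x = rng.normal(size=n)

phi = sum((c0[i] + c[i] @ x) * (d0[i] + d[i] @ x) for i in range(p))
aff = sum(c0[i] * d0[i] + (c0[i] * d[i] + d0[i] * c[i]) @ x for i in range(p))
u, v = c @ x, d @ x                      # u_i = c_i^T x, v_i = d_i^T x

dc1 = aff + 0.5 * np.sum((u + v) ** 2) - 0.5 * np.sum(u ** 2 + v ** 2)   # form (i)
dc2 = aff + 0.25 * np.sum((u + v) ** 2) - 0.25 * np.sum((u - v) ** 2)    # form (ii)
dc3 = aff + 0.5 * np.sum(u ** 2 + v ** 2) - 0.5 * np.sum((u - v) ** 2)   # form (iii)

print(np.allclose([dc1, dc2, dc3], phi))  # expected output: True
```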
Based on (i)–(iii), Problem 2 can be relaxed to the following three convex programs:
where \({\underline{\sigma }}_{i}=\min _{{x}\in D}(c_{i}^{\top }x-d_{i}^{\top }x),\) \({\overline{\sigma }}_{i}=\max _{{x}\in D}(c_{i}^{\top }x-d_{i}^{\top }x), i=1,\ldots , p.\)
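The bounds \({\underline{\sigma }}_{i}\) and \({\overline{\sigma }}_{i}\) are obtained by solving \(2p\) linear programs over \(D\). A minimal sketch of this step is given below; it assumes, purely for illustration, that \(D\) has the hypothetical polyhedral description \(\{x: Ax\le b,\ x\ge 0\}\), which may differ from the paper's actual description of \(D\):

```python
# Sketch (under assumptions) of computing sigma_lo[i] = min_{x in D} (c_i - d_i)^T x
# and sigma_up[i] = max_{x in D} (c_i - d_i)^T x by linear programming, with the
# hypothetical feasible set D = {x : A x <= b, x >= 0}.
import numpy as np
from scipy.optimize import linprog

def sigma_bounds(A, b, c, d):
    p = c.shape[0]
    lo, up = np.empty(p), np.empty(p)
    for i in range(p):
        w = c[i] - d[i]
        res_min = linprog(w, A_ub=A, b_ub=b)       # min  w^T x  s.t. A x <= b, x >= 0
        res_max = linprog(-w, A_ub=A, b_ub=b)      # max  w^T x  (minimize -w^T x)
        lo[i], up[i] = res_min.fun, -res_max.fun   # assumes both LPs are feasible and bounded
    return lo, up
```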
The last three convex relaxation programs are based on the D.C. forms of the objective function \(\varphi (x)\) in Problem 2 and the eigenvalue decomposition of the matrix in the quadratic representation of \(\varphi (x)\). The details are explained as follows.
Note that the D.C. forms offered by (i)–(ii) can be rewritten as:
(i) \(\varphi (x)=\sum \nolimits _{i=1}^{p}({c}_{0i}{d}_{0i}+( {c}_{0i}d_{i}+{d}_{0i}c_{i})^{\top }x)+\frac{1}{2}\sum \nolimits _{i=1}^{p} (c_{i}^{\top }x+d_{i}^{\top }x) ^{2}-x^{\top }Q_{1}x\),
(ii) \(\varphi (x)=\sum \nolimits _{i=1}^{p}({c}_{0i}{d}_{0i}+( {c}_{0i}d_{i}+{d}_{0i}c_{i})^{\top }x)+\frac{1}{4}\sum \nolimits _{i=1}^{p} (c_{i}^{\top }x+d_{i}^{\top }x) ^{2}-x^{\top }Q_{2}x\),
where \(Q_{1}=\frac{1}{2}\sum \nolimits _{i=1}^{p}(c_{i}c_{i}^{\top }+d_{i}d_{i}^{\top }),\ Q_{2}=\frac{1}{4}\sum \nolimits _{i=1}^{p}(c_{i}-d_{i})(c_{i}-d_{i})^{\top }\) are symmetric positive semidefinite matrices. Therefore, there exists an orthonormal matrix \({\tilde{U}}\in {\mathbb {R}}^{n\times n}\) with its columns \({\tilde{u}}_{1},\ldots ,{\tilde{u}}_{n}\in {\mathbb {R}}^{n}\), and a diagonal matrix \({\tilde{D}}\in {\mathbb {R}}^{n\times n}\) with its diagonal elements \({\tilde{\lambda }}_{1},\ldots ,{\tilde{\lambda }}_{n}\in {\mathbb {R}}\) such that \(Q_{1}={\tilde{U}}{\tilde{D}}{\tilde{U}}^{\top }\). Let \(\Theta ^{+}=\{i=1,\ldots ,n: {\tilde{\lambda }}_{i}> 0\}\) and \({\tilde{\vartheta }}_{i}=\sqrt{{\tilde{\lambda }}_{i}}\cdot {\tilde{u}}_{i}\) for all \(i\in \Theta ^{+},\) then the convex relaxation of Problem 2 can be obtained as follows:
where \({\tilde{\vartheta }}^{L}_{i}=\min _{{x}\in D}({\tilde{\vartheta }}_{i}^{\top }{x}),\) \({\tilde{\vartheta }}^{U}_{i}=\max _{{x}\in D}({\tilde{\vartheta }}_{i}^{\top }{x}), \ i\in \Theta ^{+}.\)
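As an illustration of this construction, the following sketch (with random placeholder data \(c_{i}, d_{i}\), not from the paper) builds \(Q_{1}\), computes its eigendecomposition, forms the vectors \({\tilde{\vartheta }}_{i}\) for \(i\in \Theta ^{+}\), and checks the identity \(x^{\top }Q_{1}x=\sum _{i\in \Theta ^{+}}({\tilde{\vartheta }}_{i}^{\top }x)^{2}\):

```python
# Build Q1, take its eigendecomposition, form tilde_vartheta_i for i in Theta^+,
# and verify x^T Q1 x = sum_{i in Theta^+} (tilde_vartheta_i^T x)^2.
# The data c_i, d_i are random placeholders.
import numpy as np

rng = np.random.default_rng(1)
n, p = 6, 2
c, d = rng.normal(size=(p, n)), rng.normal(size=(p, n))

Q1 = 0.5 * sum(np.outer(c[i], c[i]) + np.outer(d[i], d[i]) for i in range(p))

lam, U = np.linalg.eigh(Q1)                           # Q1 = U diag(lam) U^T
pos = lam > 1e-12                                     # index set Theta^+
vartheta = np.sqrt(lam[pos])[:, None] * U[:, pos].T   # row i is tilde_vartheta_i^T

x = rng.normal(size=n)
print(np.isclose(x @ Q1 @ x, np.sum((vartheta @ x) ** 2)))  # expected: True
```

The analogous construction with \(Q_{2}\) and \(\Gamma ^{+}\) below differs only in the matrix being decomposed.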
Similarly, there exists an orthonormal matrix \({\hat{U}}\in {\mathbb {R}}^{n\times n}\) with its columns \({\hat{u}}_{1},\ldots ,{\hat{u}}_{n}\in {\mathbb {R}}^{n}\), and a diagonal matrix \({\hat{D}}\in {\mathbb {R}}^{n\times n}\) with its diagonal elements \({\hat{\lambda }}_{1},\ldots ,{\hat{\lambda }}_{n}\in {\mathbb {R}}\) such that \(Q_{2}={\hat{U}}{\hat{D}}{\hat{U}}^{\top }\). Let \(\Gamma ^{+}=\{i=1,\ldots ,n: {\hat{\lambda }}_{i}> 0\}\) and \({\hat{\vartheta }}_{i}=\sqrt{{\hat{\lambda }}_{i}}\cdot {\hat{u}}_{i}\) for all \(i\in \Gamma ^{+},\) then we can construct the convex relaxation of Problem 2 as follows:
where \({\hat{\vartheta }}^{L}_{i}=\min _{{x}\in D}({\hat{\vartheta }}_{i}^{\top }{x}),\) \({\hat{\vartheta }}^{U}_{i}=\max _{{x}\in D}({\hat{\vartheta }}_{i}^{\top }{x}), \ i\in \Gamma ^{+}.\)
To report the last convex relaxation of Problem 2, we rewrite \(\varphi (x)\) as a quadratic function below:
where \({\hat{a}}=\sum _{i=1}^{p}(c_{0i}d_{i}+d_{0i}c_{i}),\ {\hat{a}}_{0}=\sum _{i=1}^{p}c_{0i}d_{0i}\), and \({\hat{Q}}=\frac{1}{2}\sum _{i=1}^{p}(c_{i}d_{i}^{\top }+d_{i}c_{i}^{\top })\) is a symmetric matrix. Thus there exists an orthonormal matrix \({\bar{P}}\in {\mathbb {R}}^{n\times n}\) with columns \({\bar{p}}_{1},\ldots ,{\bar{p}}_{n}\in {\mathbb {R}}^{n}\) and a diagonal matrix \({\bar{D}}\in {\mathbb {R}}^{n\times n}\) with diagonal elements \({\bar{\lambda }}_{1},\ldots ,{\bar{\lambda }}_{n}\in {\mathbb {R}}\) such that \({\hat{Q}}={\bar{P}}{\bar{D}}{\bar{P}}^{\top }\). Let \(\Lambda ^{+}=\{i=1,\ldots ,n: {\bar{\lambda }}_{i}> 0\},\ \Lambda ^{-}=\{i=1,\ldots ,n: {\bar{\lambda }}_{i}< 0\}\) and \({\bar{\vartheta }}_{i}=\sqrt{|{\bar{\lambda }}_{i}|}\cdot {\bar{p}}_{i},\ i\in \Lambda ^{+}\cup \Lambda ^{-}\). Then \(x^{\top }{\hat{Q}}x=\sum _{i\in \Lambda ^{+}}({\bar{\vartheta }}_{i}^{\top }x)^{2}-\sum _{i\in \Lambda ^{-}}({\bar{\vartheta }}_{i}^{\top }x)^{2}\), and the convex relaxation of Problem 2 can be obtained as follows:
where \({\bar{\vartheta }}^{L}_{i}=\min _{{x}\in D}({\bar{\vartheta }}_{i}^{\top }{x}),\) \({\bar{\vartheta }}^{U}_{i}=\max _{{x}\in D}({\bar{\vartheta }}_{i}^{\top }{x}), \ i\in \Lambda ^{-}.\)
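Finally, the sketch below (again with random placeholder data) illustrates the split of the indefinite matrix \({\hat{Q}}\) into its convex and concave parts and checks the identity \(x^{\top }{\hat{Q}}x=\sum _{i\in \Lambda ^{+}}({\bar{\vartheta }}_{i}^{\top }x)^{2}-\sum _{i\in \Lambda ^{-}}({\bar{\vartheta }}_{i}^{\top }x)^{2}\). The function at the end gives the standard secant underestimator of \(-t^{2}\) over an interval (its convex envelope), which is one common way to relax the concave terms; whether this coincides with the relaxation omitted above is stated only as an assumption:

```python
# Split the indefinite matrix hat_Q into convex and concave parts via its
# eigendecomposition (note the absolute value in bar_vartheta_i = sqrt(|lam_i|) p_i)
# and verify x^T hat_Q x = sum_{Lambda^+} (.)^2 - sum_{Lambda^-} (.)^2.
# Data are random placeholders.
import numpy as np

rng = np.random.default_rng(2)
n, p = 6, 2
c, d = rng.normal(size=(p, n)), rng.normal(size=(p, n))

Qhat = 0.5 * sum(np.outer(c[i], d[i]) + np.outer(d[i], c[i]) for i in range(p))
lam, P = np.linalg.eigh(Qhat)
plus, minus = lam > 1e-12, lam < -1e-12
vp = np.sqrt(lam[plus])[:, None] * P[:, plus].T       # rows: bar_vartheta_i^T, i in Lambda^+
vm = np.sqrt(-lam[minus])[:, None] * P[:, minus].T    # rows: bar_vartheta_i^T, i in Lambda^-

x = rng.normal(size=n)
print(np.isclose(x @ Qhat @ x, np.sum((vp @ x) ** 2) - np.sum((vm @ x) ** 2)))  # expected: True

def secant_underestimator(t, L, U):
    """Linear underestimator of -t^2 on [L, U] (its convex envelope): -(L+U)*t + L*U."""
    return -(L + U) * t + L * U
```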
Rights and permissions
Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.
About this article
Cite this article
Huang, B., Shen, P. An efficient global optimization algorithm for a class of linear multiplicative problems based on convex relaxation. Comp. Appl. Math. 43, 247 (2024). https://doi.org/10.1007/s40314-024-02765-9