Abstract
We describe a potential reduction method for convex optimization problems involving matrix inequalities. The method is based on the theory developed by Nesterov and Nemirovsky and generalizes Gonzaga and Todd's method for linear programming. A worst-case analysis shows that the number of iterations grows as the square root of the problem size, but in practice it appears to grow more slowly. As in other interior-point methods, the overall computational effort is therefore dominated by the least-squares system that must be solved in each iteration. A type of conjugate-gradient algorithm can be used for this purpose, which results in important savings for two reasons. First, it allows us to take advantage of the special structure the problems often have (e.g., Lyapunov or algebraic Riccati inequalities). Second, we show that the polynomial bound on the number of iterations remains valid even if the conjugate-gradient algorithm is not run until completion, which in practice can greatly reduce the computational effort per iteration.
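To make the role of the truncated conjugate-gradient step concrete, the sketch below runs a plain conjugate-gradient iteration on a symmetric positive definite system and stops early after a fixed number of steps. It is only an illustrative sketch under generic assumptions: the function truncated_cg, the test matrix, and the stopping rule are hypothetical stand-ins, not the least-squares system, preconditioner, or termination criterion analyzed in the paper.

```python
# Illustrative sketch only: early-terminated conjugate gradients for H x = b,
# with H a stand-in for the symmetric positive definite system arising in each
# interior-point iteration. This is NOT the authors' implementation.
import numpy as np

def truncated_cg(H, b, tol=1e-8, max_iter=50):
    """Approximately solve H x = b (H symmetric positive definite), stopping
    after max_iter steps or when the residual is small relative to ||b||."""
    x = np.zeros_like(b)
    r = b - H @ x           # residual
    p = r.copy()            # search direction
    rs_old = r @ r
    for _ in range(max_iter):
        if np.sqrt(rs_old) <= tol * np.linalg.norm(b):
            break           # converged to the requested tolerance
        Hp = H @ p
        alpha = rs_old / (p @ Hp)
        x += alpha * p
        r -= alpha * Hp
        rs_new = r @ r
        p = r + (rs_new / rs_old) * p
        rs_old = rs_new
    return x

# Usage example on a random well-conditioned SPD system; the deliberately
# small iteration cap mimics not running the method to completion.
rng = np.random.default_rng(0)
A = rng.standard_normal((30, 30))
H = A.T @ A + 30.0 * np.eye(30)
b = rng.standard_normal(30)
x_approx = truncated_cg(H, b, max_iter=10)
print("residual norm:", np.linalg.norm(H @ x_approx - b))
```

In this generic setting, capping the number of conjugate-gradient steps trades per-iteration accuracy for cost, which is the kind of per-iteration saving referred to above.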
We describe in detail how the algorithm works for optimization problems with \(L\) Lyapunov inequalities, each of size \(m\). We prove an overall worst-case operation count of \(O(m^{5.5}L^{1.5})\). The average-case complexity appears to be closer to \(O(m^{4}L^{1.5})\). This estimate is justified by extensive numerical experimentation, and is consistent with other researchers' experience with the practical performance of interior-point algorithms for linear programming.
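For readers unfamiliar with the constraint class, the LaTeX fragment below records one standard form of a set of coupled Lyapunov inequalities; it is only meant to fix notation for the sizes \(m\) and \(L\) appearing in the operation counts, and the formulation actually treated in the paper may contain additional terms.

```latex
% One standard (assumed, generic) form of L coupled Lyapunov inequalities in a
% single symmetric m-by-m variable P; the paper's formulation may differ.
\[
  A_i^{\mathsf{T}} P + P A_i \prec 0, \qquad i = 1, \dots, L,
  \qquad P = P^{\mathsf{T}} \in \mathbf{R}^{m \times m},
\]
% so each inequality has size m and P contributes m(m+1)/2 scalar unknowns.
```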
This result means that the computational cost of extending current control theory based on the solution of Lyapunov or Riccati equations to a theory that is based on the solution of (multiple, coupled) Lyapunov or Riccati inequalities is modest.
References
I. Adler, M.G.C. Resende, G. Veiga and N. Karmarkar, “An implementation of Karmarkar's algorithm for linear programming,” Mathematical Programming 44 (1989) 297–335.
F. Alizadeh, “Combinatorial optimization with interior point methods and semi-definite matrices,” Ph.D. Thesis, Univ. Minnesota, 1991.
F. Alizadeh, “Optimization over the positive-definite cone: interior point methods and combinatorial applications,” in: P. Pardalos, ed., Advances in Optimization and Parallel Computing (North-Holland, Amsterdam, 1992).
K.M. Anstreicher, “A combined phase I-phase II scaled potential algorithm for linear programming,” Mathematical Programming 52 (1991) 429–439.
W.F. Arnold and A.J. Laub, “Generalized eigenproblem algorithms and software for algebraic Riccati equations,” Proceedings of the IEEE 72 (1984) 1746–1754.
R. Bellman and K. Fan, “On systems of linear inequalities in Hermitian matrix variables,” in: V.L. Klee, ed., Convexity, Proceedings of Symposia in Pure Mathematics 7 (American Mathematical Society, Providence, RI, 1963) 1–11.
A. Ben Tal and A. Nemirovskii, “Interior point polynomial time methods for truss topology design,” Technical Report 3/92, Faculty of Industrial Engineering and Management, Technion, Israel Institute of Technology, 1992.
S. Boyd and L. El Ghaoui, “Method of centers for minimizing generalized eigenvalues,” Linear Algebra and Applications 188 (1993) 63–111 (special issue on Numerical Linear Algebra Methods in Control, Signals and Systems).
S. Boyd, L. El Ghaoui, E. Feron and V. Balakrishnan, Linear Matrix Inequalities in System and Control Theory, Studies in Applied Mathematics 15 (SIAM, Philadelphia, PA, 1994).
D. Den Hertog, C. Roos and T. Terlaky, “A large-step analytic center method for a class of smooth convex programming problems,” SIAM Journal on Optimization 2 (1992) 55–70.
I. Dikin, “Iterative solution of problems of linear and quadratic programming,” Soviet Mathematics Doklady 8 (1967) 674–675.
R. Fletcher, “A nonlinear programming problem in statistics (educational testing),” SIAM Journal on Scientific and Statistical Computing 2 (1981) 257–267.
R. Fletcher, “Semidefinite matrix constraints in optimization,” SIAM Journal on Control and Optimization 23 (1985) 493–513.
R.M. Freund, “A potential-function reduction algorithm for solving a linear program directly from an infeasible warm start,” Mathematical Programming 52 (1991) 441–466.
J.D. Gardiner, A.J. Laub, J.J. Amato and C.B. Moler, “Solution of the Sylvester matrix equation \(AXB^T + CXD^T = E\),” ACM Transactions on Mathematical Software 18 (1992) 223–231.
P.E. Gill, W. Murray, M.A. Saunders, J.A. Tomlin and M.H. Wright, “On projected Newton barrier methods for linear programming and an equivalence to Karmarkar's projective method,” Mathematical Programming Studies 36 (1986) 183–209.
D. Goldfarb and S. Mehrotra, “A relaxed version of Karmarkar's method,” Mathematical Programming 40 (1988) 289–315.
D. Goldfarb and S. Mehrotra, “A self-correcting version of Karmarkar's algorithm,” SIAM Journal on Numerical Analysis 26 (1989) 1006–1015.
G. Golub and C. Van Loan, Matrix Computations (Johns Hopkins Univ. Press, Baltimore, MD, 1989).
G. Golub, S. Nash and C. Van Loan, “A Hessenberg-Schur method for the matrix problem \(AX + XB = C\),” IEEE Transactions on Automatic Control 24 (1979) 909–913.
C.C. Gonzaga, “Path-following methods for linear programming,” SIAM Review 34 (1992) 167–224.
C.C. Gonzaga and M.J. Todd, “An \(O(\sqrt{n} L)\)-iteration large-step primal-dual affine algorithm for linear programming,” SIAM Journal on Optimization 2 (1992) 349–359.
M. Grötschel, L. Lovász and A. Schrijver, Geometric Algorithms and Combinatorial Optimization, Algorithms and Combinatorics 2 (Springer, Berlin, 1988).
F. Jarre, “An interior-point method for minimizing the maximum eigenvalue of a linear combination of matrices,” SIAM Journal on Control and Optimization 31 (1993) 1360–1377.
K. Kim and J.L. Nazareth, “Implementation of a primal null-space affine scaling method and its extensions,” Technical Report, Department of Pure and Applied Mathematics, Washington State Univ., 1992.
I.J. Lustig, “Feasibility issues in a primal-dual interior-point method for linear programming,” Mathematical Programming 49 (1991) 145–162.
S. Mehrotra, “Implementations of affine scaling methods: approximate solutions of systems of linear equations using preconditioned conjugate gradient methods,” ORSA Journal on Computing 4 (1992) 103–118.
R.D.C. Monteiro and I. Adler, “Interior path-following primal-dual algorithms. Part II: Convex quadratic programming,” Mathematical Programming 44 (1989) 43–66.
Yu. Nesterov and A. Nemirovsky, Optimization over Positive Semidefinite Matrices: Mathematical Background and User's Manual (USSR Acad. Sci. Central Econ. & Math. Inst., Moscow, 1990).
Yu. Nesterov and A. Nemirovsky, Interior-point Polynomial Methods in Convex Programming, Studies in Applied Mathematics 13 (SIAM, Philadelphia, PA, 1994).
M. Overton, “On minimizing the maximum eigenvalue of a symmetric matrix,” SIAM Journal on Matrix Analysis and Applications 9 (1988) 256–268.
M. Overton, “Large-scale optimization of eigenvalues,” SIAM Journal on Optimization 2 (1992) 88–120.
C.C. Paige and M.S. Saunders, “LSQR: An algorithm for sparse linear equations and sparse least squares,” ACM Transactions on Mathematical Software 8 (1982) 43–71.
J.-P. Vial, “Computational experience with a primal-dual interior-point method for smooth convex programming,” Technical Report, Département d'Économie Commerciale et Industrielle, Univ. Genève, 1992.
G.A. Watson, “Algorithms for minimum trace factor analysis,” SIAM Journal on Matrix Analysis and Applications 13 (1992) 1039–1053.
Y. Ye, “An \(O(n^3 L)\) potential reduction algorithm for linear programming,” Mathematical Programming 50 (1991) 239–258.
Additional information
Supported by the Belgian National Fund for Scientific Research (NFWO). Research supported in part by the Belgian program on Interuniversity Attraction Poles (IUAP 17 and 50) initiated by the Belgian State, Prime Minister's Office, Science Policy Programming.
Research supported in part by AFOSR (under F49620-92-J-0013), NSF (under ECS-9222391) and ARPA (under F49620-93-1-0085).
About this article
Cite this article
Vandenberghe, L., Boyd, S. A primal–dual potential reduction method for problems involving matrix inequalities. Mathematical Programming 69, 205–236 (1995). https://doi.org/10.1007/BF01585558