
Abstract

Historically, scalability has been a major challenge for the successful application of semidefinite programming in fields such as machine learning, control, and robotics. In this article, we survey recent approaches to this challenge, including those that exploit structure (e.g., sparsity and symmetry) in a problem, those that produce low-rank approximate solutions to semidefinite programs, those that use more scalable algorithms that rely on augmented Lagrangian techniques and the alternating-direction method of multipliers, and those that trade off scalability with conservatism (e.g., by approximating semidefinite programs with linear and second-order cone programs). For each class of approaches, we provide a high-level exposition, an entry point to the corresponding literature, and examples drawn from machine learning, control, or robotics. We also present a list of software packages that implement many of the techniques discussed in the review. Our hope is that this article will serve as a gateway to the rich and exciting literature on scalable semidefinite programming for both theorists and practitioners.
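To give a concrete flavor of the first-order approaches surveyed here, the following is a minimal sketch (not taken from the article; the toy problem, step size, and iteration count are illustrative) of projected gradient descent for a tiny semidefinite program over the "spectraplex" {X : X ⪰ 0, tr(X) = 1}. Each iteration needs only one eigendecomposition, which is the basic workhorse of many scalable SDP solvers.

```python
import numpy as np

def proj_simplex(w):
    """Euclidean projection of a vector onto the probability simplex."""
    u = np.sort(w)[::-1]
    css = np.cumsum(u) - 1.0
    idx = np.arange(1, len(u) + 1)
    rho = np.nonzero(u - css / idx > 0)[0][-1]
    return np.maximum(w - css[rho] / (rho + 1), 0.0)

def proj_spectraplex(X):
    """Project a symmetric matrix onto {X : X PSD, tr(X) = 1} by
    projecting its eigenvalues onto the simplex."""
    w, V = np.linalg.eigh(X)
    return (V * proj_simplex(w)) @ V.T

# Toy SDP: minimize <C, X> subject to X PSD, tr(X) = 1.
# Its optimal value is the smallest eigenvalue of C.
C = np.array([[2.0, 1.0], [1.0, 3.0]])
X = np.eye(2) / 2.0                      # feasible starting point
for _ in range(100):
    X = proj_spectraplex(X - 0.5 * C)    # gradient of <C, X> is C

print(round(np.trace(C @ X), 4))         # → 1.382, the smallest eigenvalue of C
```

This sketch trades the cubic-per-iteration cost of interior-point methods for cheap projected-gradient steps, the same trade-off that motivates the augmented Lagrangian and operator-splitting methods discussed in the survey.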

DOI: 10.1146/annurev-control-091819-074326

Literature Cited

1. Goemans MX, Williamson DP 1995. Improved approximation algorithms for maximum cut and satisfiability problems using semidefinite programming. J. ACM 42:1115–45
2. Lovász L 1979. On the Shannon capacity of a graph. IEEE Trans. Inf. Theory 25:1–7
3. Recht B, Fazel M, Parrilo PA 2010. Guaranteed minimum-rank solutions of linear matrix equations via nuclear norm minimization. SIAM Rev. 52:471–501
4. d'Aspremont A, Ghaoui LE, Jordan MI, Lanckriet GR 2007. A direct formulation for sparse PCA using semidefinite programming. SIAM Rev. 49:434–48
5. Hall G 2018. Optimization over nonnegative and convex polynomials with and without semidefinite programming. PhD Thesis, Princeton Univ., Princeton, NJ
6. Lasserre JB 2001. Global optimization with polynomials and the problem of moments. SIAM J. Optim. 11:796–817
7. Parrilo PA 2003. Semidefinite programming relaxations for semialgebraic problems. Math. Progr. 96:293–320
8. Ahmadi AA, Majumdar A 2019. DSOS and SDSOS optimization: more tractable alternatives to sum of squares and semidefinite optimization. SIAM J. Appl. Algebr. Geom. 3:192–230
9. Parrilo PA 2000. Structured semidefinite programs and semialgebraic geometry methods in robustness and optimization. PhD Thesis, Calif. Inst. Technol., Pasadena
10. Vandenberghe L, Boyd S 1996. Semidefinite programming. SIAM Rev. 38:49–95
11. Renegar J 2019. Accelerated first-order methods for hyperbolic programming. Math. Progr. 173:1–35
12. Arora S, Hazan E, Kale S 2005. Fast algorithms for approximate semidefinite programming using the multiplicative weights update method. 46th Annual IEEE Symposium on Foundations of Computer Science, pp. 339–48. Piscataway, NJ: IEEE
13. Ding L, Yurtsever A, Cevher V, Tropp JA, Udell M 2019. An optimal-storage approach to semidefinite programming using approximate complementarity. arXiv:1902.03373 [math.OC]
14. Henrion D, Malick J 2011. Projection methods for conic feasibility problems: applications to polynomial sum-of-squares decompositions. Optim. Methods Softw. 26:23–46
15. Yurtsever A, Fercoq O, Cevher V 2019. A conditional gradient-based augmented Lagrangian framework. arXiv:1901.04013 [math.OC]
16. Nie J, Wang L 2012. Regularization methods for SDP relaxations in large-scale polynomial optimization. SIAM J. Optim. 22:408–28
17. Bertsimas D, Freund RM, Sun XA 2013. An accelerated first-order method for solving SOS relaxations of unconstrained polynomial optimization problems. Optim. Methods Softw. 28:424–41
18. Hopkins SB, Schramm T, Shi J, Steurer D 2016. Fast spectral algorithms from sum-of-squares proofs: tensor decomposition and planted sparse vectors. Proceedings of the Forty-Eighth Annual ACM Symposium on Theory of Computing, pp. 178–91. New York: ACM
19. Sun J 2019. Provable nonconvex methods/algorithms. https://sunju.org/research/nonconvex
20. Blekherman G, Parrilo PA, Thomas RR, eds. 2013. Semidefinite Optimization and Convex Algebraic Geometry. Philadelphia: Soc. Ind. Appl. Math.
21. Hall G 2019. Engineering and business applications of sum of squares polynomials. arXiv:1906.07961 [math.OC]
22. Putinar M 1993. Positive polynomials on compact semi-algebraic sets. Indiana Univ. Math. J. 42:969–84
23. Agler J, Helton W, McCullough S, Rodman L 1988. Positive semidefinite matrices with a given sparsity pattern. Linear Algebra Appl. 107:101–49
24. Zheng Y, Mason RP, Papachristodoulou A 2017. Scalable design of structured controllers using chordal decomposition. IEEE Trans. Autom. Control 63:752–67
25. Grone R, Johnson CR, Sá EM, Wolkowicz H 1984. Positive definite completions of partial Hermitian matrices. Linear Algebra Appl. 58:109–24
26. Fukuda M, Kojima M, Murota K, Nakata K 2001. Exploiting sparsity in semidefinite programming via matrix completion I: general framework. SIAM J. Optim. 11:647–74
27. Kim S, Kojima M, Mevissen M, Yamashita M 2011. Exploiting sparsity in linear and nonlinear matrix inequalities via positive semidefinite matrix completion. Math. Progr. 129:33–68
28. Fujisawa K, Kim S, Kojima M, Okamoto Y, Yamashita M 2009. User's manual for SparseCoLO: conversion methods for sparse conic-form linear optimization problems. Res. Rep. B-453, Dep. Math. Comput. Sci., Tokyo Inst. Technol., Tokyo
29. Mason RP, Papachristodoulou A 2014. Chordal sparsity, decomposing SDPs and the Lyapunov equation. 2014 American Control Conference, pp. 531–37. Piscataway, NJ: IEEE
30. Zheng Y, Mason RP, Papachristodoulou A 2016. A chordal decomposition approach to scalable design of structured feedback gains over directed graphs. 2016 IEEE 55th Conference on Decision and Control, pp. 6909–14. Piscataway, NJ: IEEE
31. Zheng Y, Fantuzzi G, Papachristodoulou A, Goulart P, Wynn A 2020. Chordal decomposition in operator-splitting methods for sparse semidefinite programs. Math. Progr. 180:489–532
32. Burer S 2003. Semidefinite programming in the space of partial positive semidefinite matrices. SIAM J. Optim. 14:139–72
33. Andersen MS, Dahl J, Vandenberghe L 2010. Implementation of nonsymmetric interior-point methods for linear optimization over sparse matrix cones. Math. Progr. Comput. 2:167–201
34. Sun Y, Andersen MS, Vandenberghe L 2014. Decomposition in conic optimization with partially separable structure. SIAM J. Optim. 24:873–97
35. Kalbat A, Lavaei J 2015. A fast distributed algorithm for decomposable semidefinite programs. 2015 54th IEEE Conference on Decision and Control, pp. 1742–49. Piscataway, NJ: IEEE
36. Madani R, Kalbat A, Lavaei J 2015. ADMM for sparse semidefinite programming with applications to optimal power flow problem. 2015 54th IEEE Conference on Decision and Control, pp. 5932–39. Piscataway, NJ: IEEE
37. Sun Y, Vandenberghe L 2015. Decomposition methods for sparse matrix nearness problems. SIAM J. Matrix Anal. Appl. 36:1691–717
38. Andersen MS, Pakazad SK, Hansson A, Rantzer A 2014. Robust stability analysis of sparsely interconnected uncertain systems. IEEE Trans. Autom. Control 59:2151–56
39. Benson SJ, Ye Y 2006. DSDP5 user guide—software for semidefinite programming. Tech. Rep. ANL/MCS-TM-277, Argonne Natl. Lab., Lemont, IL
40. Weisser T, Lasserre JB, Toh KC 2018. Sparse-BSOS: a bounded degree SOS hierarchy for large scale polynomial optimization with sparsity. Math. Progr. Comput. 10:1–32
41. Lasserre JB, Toh KC, Yang S 2017. A bounded degree SOS hierarchy for polynomial optimization. EURO J. Comput. Optim. 5:87–117
42. Mangelson JG, Liu J, Eustice RM, Vasudevan R 2018. Guaranteed globally optimal planar pose graph and landmark SLAM via sparse-bounded sums-of-squares programming. arXiv:1809.07744 [cs.RO]
43. Thrun S, Burgard W, Fox D 2005. Probabilistic Robotics. Cambridge, MA: MIT Press
44. Rosen DM, Carlone L, Bandeira AS, Leonard JJ 2019. SE-Sync: a certifiably correct algorithm for synchronization over the special Euclidean group. Int. J. Robot. Res. 38:95–125
45. Gatermann K, Parrilo PA 2004. Symmetry groups, semidefinite programs, and sums of squares. J. Pure Appl. Algebra 192:95–128
46. Vallentin F 2009. Symmetry in semidefinite programs. Linear Algebra Appl. 430:360–69
47. De Klerk E 2010. Exploiting special structure in semidefinite programming: a survey of theory and applications. Eur. J. Oper. Res. 201:1–10
48. Permenter FN 2017. Reduction methods in semidefinite and conic optimization. PhD Thesis, Mass. Inst. Technol., Cambridge
49. Henrion D, Garulli A, eds. 2005. Positive Polynomials in Control. Berlin: Springer
50. Cogill R, Lall S, Parrilo PA 2008. Structured semidefinite programs for the control of symmetric systems. Automatica 44:1411–17
51. Arcak M, Meissen C, Packard A 2016. Networks of Dissipative Systems: Compositional Certification of Stability, Performance, and Safety. Cham, Switz.: Springer
52. Drusvyatskiy D, Wolkowicz H 2017. The many faces of degeneracy in conic optimization. Found. Trends Optim. 3:77–170
53. Borwein J, Wolkowicz H 1981. Regularizing the abstract convex program. J. Math. Anal. Appl. 83:495–530
54. Pataki G 2013. Strong duality in conic linear programming: facial reduction and extended duals. Computational and Analytical Mathematics, DH Bailey, HH Bauschke, P Borwein, F Garvan, M Théra, pp. 613–34. New York: Springer
55. Kungurtsev V, Marecek J 2018. A two-step pre-processing for semidefinite programming. arXiv:1806.10868 [math.OC]
56. Waki H, Ebihara Y, Sebe N 2016. Reduction of SDPs in H∞ control of SISO systems and performance limitations analysis. 2016 55th IEEE Conference on Decision and Control, pp. 646–51. Piscataway, NJ: IEEE
57. Krislock N, Wolkowicz H 2010. Explicit sensor network localization using semidefinite representations and facial reductions. SIAM J. Optim. 20:2679–708
58. Zhu Y, Pataki G, Tran-Dinh Q 2019. Sieve-SDP: a simple facial reduction algorithm to preprocess semidefinite programs. Math. Progr. Comput. 11:503–86
59. Barvinok AI 1995. Problems of distance geometry and convex properties of quadratic maps. Discrete Comput. Geom. 13:189–202
60. Pataki G 1998. On the rank of extreme matrices in semidefinite programs and the multiplicity of optimal eigenvalues. Math. Oper. Res. 23:339–58
61. Lemon A, So AMC, Ye Y 2016. Low-rank semidefinite programming: theory and applications. Found. Trends Optim. 2:1–156
62. Candès EJ, Plan Y 2010. Matrix completion with noise. Proc. IEEE 98:925–36
63. Bennett J, Lanning S 2007. The Netflix Prize. Proceedings of KDD Cup and Workshop 2007, pp. 3–6. New York: ACM
64. Kulis B, Surendran AC, Platt JC 2007. Fast low-rank semidefinite programming for embedding and clustering. Proceedings of the Eleventh International Conference on Artificial Intelligence and Statistics, pp. 235–42. Proc. Mach. Learn. Res. Vol. 2. N.p.: PMLR
65. Burer S, Monteiro RD 2003. A nonlinear programming algorithm for solving semidefinite programs via low-rank factorization. Math. Progr. 95:329–57
66. Liu DC, Nocedal J 1989. On the limited memory BFGS method for large scale optimization. Math. Progr. 45:503–28
67. Burer S, Monteiro RD 2005. Local minima and convergence in low-rank semidefinite programming. Math. Progr. 103:427–44
68. Boumal N, Absil PA 2011. RTRMC: a Riemannian trust-region method for low-rank matrix completion. Advances in Neural Information Processing Systems 24, J Shawe-Taylor, RS Zemel, PL Bartlett, F Pereira, KQ Weinberger, pp. 406–14. Red Hook, NY: Curran
69. Absil PA, Baker CG, Gallivan KA 2007. Trust-region methods on Riemannian manifolds. Found. Comput. Math. 7:303–30
70. Absil PA, Mahony R, Sepulchre R 2009. Optimization Algorithms on Matrix Manifolds. Princeton, NJ: Princeton Univ. Press
71. Journée M, Bach F, Absil PA, Sepulchre R 2010. Low-rank optimization on the cone of positive semidefinite matrices. SIAM J. Optim. 20:2327–51
72. Boumal N, Voroninski V, Bandeira A 2016. The non-convex Burer-Monteiro approach works on smooth semidefinite programs. Advances in Neural Information Processing Systems 29, DD Lee, M Sugiyama, UV Luxburg, I Guyon, R Garnett, pp. 2757–65. Red Hook, NY: Curran
73. Boumal N, Voroninski V, Bandeira AS 2018. Deterministic guarantees for Burer-Monteiro factorizations of smooth semidefinite programs. arXiv:1804.02008 [math.OC]
74. Boumal N, Absil PA, Cartis C 2018. Global rates of convergence for nonconvex optimization on manifolds. IMA J. Numer. Anal. 39:1–33
75. Sato H, Iwai T 2015. A new, globally convergent Riemannian conjugate gradient method. Optimization 64:1011–31
76. Boumal N, Mishra B, Absil PA, Sepulchre R 2014. Manopt, a MATLAB toolbox for optimization on manifolds. J. Mach. Learn. Res. 15:1455–59
77. Mei S, Misiakiewicz T, Montanari A, Oliveira RI 2017. Solving SDPs for synchronization and maxcut problems via the Grothendieck inequality. arXiv:1703.08729 [math.OC]
78. Sun J, Qu Q, Wright J 2018. A geometric analysis of phase retrieval. Found. Comput. Math. 18:1131–98
79. Sun J, Qu Q, Wright J 2016. Complete dictionary recovery over the sphere II: recovery by Riemannian trust-region method. IEEE Trans. Inf. Theory 63:885–914
80. Carlone L, Rosen DM, Calafiore G, Leonard JJ, Dellaert F 2015. Lagrangian duality in 3D SLAM: verification techniques and optimal solutions. 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 125–32. Piscataway, NJ: IEEE
81. Carlone L, Calafiore GC, Tommolillo C, Dellaert F 2016. Planar pose graph optimization: duality, optimal solutions, and verification. IEEE Trans. Robot. 32:545–65
82. Erdogdu MA, Ozdaglar A, Parrilo PA, Vanli ND 2018. Convergence rate of block-coordinate maximization Burer-Monteiro method for solving large SDPs. arXiv:1807.04428 [math.OC]
83. Wang PW, Chang WC, Kolter JZ 2017. The mixing method: coordinate descent for low-rank semidefinite programming. arXiv:1706.00476 [math.OC]
84. Frank M, Wolfe P 1956. An algorithm for quadratic programming. Naval Res. Logist. Q. 3:95–110
85. Hazan E 2008. Sparse approximate solutions to semidefinite programs. Latin American Symposium on Theoretical Informatics, ES Laber, C Bornstein, LT Nogueira, L Faria, pp. 301–16. Berlin: Springer
86. Gärtner B, Matousek J 2012. Approximation Algorithms and Semidefinite Programming. Berlin: Springer
87. Yurtsever A, Udell M, Tropp JA, Cevher V 2017. Sketchy decisions: convex low-rank matrix optimization with optimal storage. arXiv:1702.06838 [math.OC]
88. Freund RM, Grigas P, Mazumder R 2017. An extended Frank–Wolfe method with in-face directions, and its application to low-rank matrix completion. SIAM J. Optim. 27:319–46
89. O'Donoghue B, Chu E, Parikh N, Boyd S 2016. Conic optimization via operator splitting and homogeneous self-dual embedding. J. Optim. Theory Appl. 169:1042–68
90. Zhao XY, Sun D, Toh KC 2010. A Newton-CG augmented Lagrangian method for semidefinite programming. SIAM J. Optim. 20:1737–65
91. Yang L, Sun D, Toh KC 2015. SDPNAL+: a majorized semismooth Newton-CG augmented Lagrangian method for semidefinite programming with nonnegative constraints. Math. Progr. Comput. 7:331–66
92. Boyd S, Parikh N, Chu E, Peleato B, Eckstein J 2011. Distributed optimization and statistical learning via the alternating direction method of multipliers. Found. Trends Mach. Learn. 3:1–122
93. Sturm J 2001. SeDuMi. Lehigh University. http://sedumi.ie.lehigh.edu
94. Mosek 2019. Introducing the MOSEK Optimization Suite 9.0.105. Mosek. https://docs.mosek.com/9.0/intro/index.html
95. Candès EJ, Li X, Ma Y, Wright J 2011. Robust principal component analysis? J. ACM 58:11
96. Toh KC, Tütüncü RH, Todd MJ. SDPT3 - a MATLAB software package for semidefinite-quadratic-linear programming. Carnegie Mellon University. http://www.math.cmu.edu/~reha/sdpt3.html
97. Rockafellar RT 1976. Augmented Lagrangians and applications of the proximal point algorithm in convex programming. Math. Oper. Res. 1:97–116
98. Peng J, Wei Y 2007. Approximating K-means-type clustering via semidefinite programming. SIAM J. Optim. 18:186–205
99. Szegedy C, Zaremba W, Sutskever I, Bruna J, Erhan D 2013. Intriguing properties of neural networks. arXiv:1312.6199 [cs.CV]
100. Papernot N, McDaniel P, Jha S, Fredrikson M, Celik ZB, Swami A 2016. The limitations of deep learning in adversarial settings. 2016 IEEE European Symposium on Security and Privacy, pp. 372–87. Piscataway, NJ: IEEE
101. Zheng S, Song Y, Leung T, Goodfellow I 2016. Improving the robustness of deep neural networks via stability training. 2016 IEEE Conference on Computer Vision and Pattern Recognition, pp. 4480–88. Piscataway, NJ: IEEE
102. Moosavi-Dezfooli SM, Fawzi A, Fawzi O, Frossard P 2017. Universal adversarial perturbations. 2017 IEEE Conference on Computer Vision and Pattern Recognition, pp. 86–94. Piscataway, NJ: IEEE
103. Tjeng V, Xiao K, Tedrake R 2017. Evaluating robustness of neural networks with mixed integer programming. arXiv:1711.07356 [cs.LG]
104. Wong E, Kolter JZ 2017. Provable defenses against adversarial examples via the convex outer adversarial polytope. arXiv:1711.00851 [cs.LG]
105. Liu C, Arnon T, Lazarus C, Barrett C, Kochenderfer MJ 2019. Algorithms for verifying deep neural networks. arXiv:1903.06758 [cs.LG]
106. Pulina L, Tacchella A 2012. Challenging SMT solvers to verify neural networks. AI Commun. 25:117–35
107. Xiang W, Musau P, Wild AA, Lopez DM, Hamilton N 2018. Verification for machine learning, autonomy, and neural networks survey. arXiv:1810.01989 [cs.AI]
108. Wang S, Pei K, Whitehouse J, Yang J, Jana S 2018. Efficient formal safety analysis of neural networks. Advances in Neural Information Processing Systems 31, S Bengio, H Wallach, H Larochelle, K Grauman, N Cesa-Bianchi, R Garnett, pp. 6367–77. Red Hook, NY: Curran
109. Raghunathan A, Steinhardt J, Liang PS 2018. Semidefinite relaxations for certifying robustness to adversarial examples. Advances in Neural Information Processing Systems 31, S Bengio, H Wallach, H Larochelle, K Grauman, N Cesa-Bianchi, R Garnett, pp. 10877–87. Red Hook, NY: Curran
110. Qin C, O'Donoghue B, Bunel R, Stanforth R, Gowal S 2019. Verification of non-linear specifications for neural networks. arXiv:1902.09592 [cs.LG]
111. Fazlyab M, Morari M, Pappas GJ 2019. Safety verification and robustness analysis of neural networks via quadratic constraints and semidefinite programming. arXiv:1903.01287 [math.OC]
112. Geršgorin SA 1931. Über die Abgrenzung der Eigenwerte einer Matrix. Bull. Acad. Sci. URSS Classe Sci. Math. 6:749–54
113. Ahmadi AA, Hall G 2017. Sum of squares basis pursuit with linear and second order cone programming. Algebraic and Geometric Methods in Discrete Mathematics, HA Harrington, M Omar, M Wright, pp. 27–53. Providence, RI: Am. Math. Soc.
114. Majumdar A, Ahmadi AA, Tedrake R 2014. Control and verification of high-dimensional systems with DSOS and SDSOS programming. 53rd IEEE Conference on Decision and Control, pp. 394–401. Piscataway, NJ: IEEE
115. Ahmadi AA, Majumdar A 2016. Some applications of polynomial optimization in operations research and real-time decision making. Optim. Lett. 10:709–29
116. Dür M 2010. Copositive programming – a survey. Recent Advances in Optimization and Its Applications in Engineering, M Diehl, F Glineur, E Jarlebring, W Michiels, pp. 3–20. Berlin: Springer
117. Ahmadi AA, Hall G 2019. On the construction of converging hierarchies for polynomial optimization based on certificates of global positivity. Math. Oper. Res. 44:1148–509
118. Ahmadi AA, Dash S, Hall G 2017. Optimization over structured subsets of positive semidefinite matrices via column generation. Discrete Optim. 24:129–51
119. Barker G, Carlson D 1975. Cones of diagonally dominant matrices. Pac. J. Math. 57:15–32
120. Löfberg J 2004. YALMIP: a toolbox for modeling and optimization in MATLAB. 2004 IEEE International Symposium on Computer Aided Control Systems Design, pp. 284–89. Piscataway, NJ: IEEE
121. Diamond S, Boyd S 2016. CVXPY: a Python-embedded modeling language for convex optimization. J. Mach. Learn. Res. 17:83
122. Stahlberg M 2018. PICOS. GitLab. https://picos-api.gitlab.io/picos
123. Posa M 2013. Systems Polynomial Optimization Toolbox. GitHub. https://github.com/mposa/spot
124. Dunning I, Huchette J, Lubin M 2017. JuMP: a modeling language for mathematical optimization. SIAM Rev. 59:295–320
125. O'Donoghue B, Chu E, Parikh N, Boyd S 2017. SCS. GitHub. https://github.com/cvxgrp/scs
126. Townsend J, Koep N, Weichwald S 2016. Pymanopt: a Python toolbox for optimization on manifolds using automatic differentiation. J. Mach. Learn. Res. 17:4755–59
127. Permenter F, Parrilo P 2018. Partial facial reduction: simplified, equivalent SDPs via approximations of the PSD cone. Math. Progr. 171:1–54
128. Papachristodoulou A, Anderson J, Valmorbida G, Prajna S, Seiler P, Parrilo PA 2013. SOSTOOLS: sum of squares optimization toolbox for MATLAB. Caltech. http://www.cds.caltech.edu/sostools
129. JuliaOpt 2019. Sum of squares programming for Julia. GitHub. https://github.com/JuliaOpt/SumOfSquares.jl
Article Type: Review Article