
A Lyapunov Function Construction for a Non-convex Douglas–Rachford Iteration

Journal of Optimization Theory and Applications

Abstract

While global convergence of the Douglas–Rachford iteration is often observed in applications, proofs of convergence remain limited to the convex case and a handful of other special cases. Lyapunov functions for difference inclusions provide not only global or local convergence certificates, but also imply robust stability, meaning that convergence is still guaranteed in the presence of persistent disturbances. In this work, a global Lyapunov function is constructed by combining known local Lyapunov functions for simpler, local subproblems via an explicit formula that depends on the problem parameters. Specifically, we consider the scenario where one set consists of the union of two lines and the other set is a line, so that the two sets intersect in two distinct points. Locally, near each intersection point, the problem reduces to the intersection of just two lines, but globally the geometry is non-convex and the Douglas–Rachford operator is multi-valued. Our approach is intended to be prototypical for the convergence analysis of the Douglas–Rachford iteration in more complex geometries that can be approximated by polygonal sets through the combination of local, simple Lyapunov functions.

[Figures 1–7 omitted]



Acknowledgements

O. Giladi has been supported by ARC Grant DP160101537. B. S. Rüffer has been supported by ARC Grant DP160102138. Both authors would like to thank the anonymous referees for their helpful comments.

Author information

Correspondence to Ohad Giladi.

Additional information

Communicated by Guoyin Li.

Appendix: Proof of Proposition 5.2

We begin by establishing condition (14). The proof for condition (15) is essentially the same and thus omitted for brevity.

1.1 Establishing Condition (14)

We have by Proposition 5.1 that \(T_{A_2,B}\mathbf {x}= \mathbf {p}_2 + \cos \theta _{2}{\mathbf {M}}_{\theta _{2}}\left( \mathbf {x}-\mathbf {p}_2\right) = \nicefrac {1}{2} \mathbf {e}_1 + \cos \theta _{2}{\mathbf {M}}_{\theta _{2}}\left( \mathbf {x}-\nicefrac {1}{2} \mathbf {e}_1\right) \). If \(\theta _{2}=\nicefrac \pi 2\), then this simplifies further to \(T_{A_2,B}\mathbf {x}= \mathbf {p}_2\), resulting in \(V_{1}(T_{A_2,B}\mathbf {x}) = \Vert \mathbf {p}_2-\mathbf {p}_1\Vert ^2 = 1\). We can hence deduce that

$$\begin{aligned} 1= V_1(T_{A_2,B}\mathbf {x}) > \rho V_1(\mathbf {x}) = \rho \Vert \mathbf {x}-\mathbf {p}_{1}\Vert ^{2}\nonumber \\ \iff \quad \mathbf {x}\in B\left( \mathbf {p}_{1},\frac{\sqrt{\rho }}{\rho }\right) . \end{aligned}$$
(31)
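This equivalence is elementary and can be confirmed numerically. The sketch below is an editorial illustration, not part of the proof; the value of \(\rho \) is an arbitrary assumption.

```python
import numpy as np

# rho is an arbitrary assumed value; p1 = -(1/2) e1 as in the proof.
rho = 1.5
p1 = np.array([-0.5, 0.0])

rng = np.random.default_rng(1)
for x in rng.uniform(-2.0, 2.0, size=(1000, 2)):
    d = np.linalg.norm(x - p1)
    # (31): 1 > rho * ||x - p1||^2  iff  x lies in B(p1, sqrt(rho)/rho).
    assert (1 > rho * d ** 2) == (d < np.sqrt(rho) / rho)
print("equivalence (31) verified on samples")
```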

Now, if \(\theta _{2}\ne \nicefrac \pi 2\), then we have \(V_1(T_{A_2,B}\mathbf {x}) = \left\| T_{A_2,B}\mathbf {x}- \mathbf {p}_1\right\| ^2 = \left\| T_{A_2,B}\mathbf {x}+ \nicefrac {1}{2} \mathbf {e}_1\right\| ^2 = \left\| \mathbf {e}_1 + \cos \theta _2{\mathbf {M}}_{\theta _{2}}\left( \mathbf {x}-\nicefrac {1}{2} \mathbf {e}_1\right) \right\| ^2 = \left\| \cos \theta _{2}{\mathbf {M}}_{\theta _{2}}\left( \mathbf {x}-\nicefrac {1}{2} \mathbf {e}_1 + S_{\theta _2}\mathbf {e}_1\right) \right\| ^2 = \cos ^2\theta _2\left\| \mathbf {x}-\nicefrac {1}{2} \mathbf {e}_1 +S_{\theta _2}\mathbf {e}_1\right\| ^2\), where \(S_{\theta _2} :=\frac{1}{\cos \theta _2} {\mathbf {M}}_{-\theta _{2}}\) satisfies \(S_{\theta _2}\mathbf {e}_1 = \mathbf {e}_1+\tan \theta _2\,\mathbf {e}_2\), and so \(-\nicefrac {1}{2} \mathbf {e}_1 + S_{\theta _2}\mathbf {e}_1 = \nicefrac {1}{2} \mathbf {e}_1 + \tan \theta _2\,\mathbf {e}_2\). The inequality \(V_1(T_{A_2,B}\mathbf {x}) > \rho V_1(\mathbf {x})\) is thus equivalent to \(\cos ^2\theta _2\left\| \mathbf {x}+\nicefrac {1}{2} \mathbf {e}_1 +\tan \theta _2\,\mathbf {e}_2\right\| ^2 > \rho \left\| \mathbf {x}+\nicefrac {1}{2} \mathbf {e}_1\right\| ^2\). Expanding this gives \(\cos ^2\theta _2\Vert \mathbf {x}+\nicefrac {1}{2} \mathbf {e}_1\Vert ^2 + 2\cos ^2\theta _2\tan \theta _2\langle \mathbf {x}+\nicefrac {1}{2} \mathbf {e}_1, \mathbf {e}_2\rangle + \cos ^2\theta _2\tan ^2\theta _2 > \rho \left\| \mathbf {x}+\nicefrac {1}{2} \mathbf {e}_1\right\| ^2\) or, equivalently,

$$\begin{aligned} (\rho -\cos ^2\theta _2)\left\| \mathbf {x}+\nicefrac {1}{2} \mathbf {e}_1\right\| ^2 - 2\cos ^2\theta _2\tan \theta _2\left\langle \mathbf {x}+\nicefrac {1}{2} \mathbf {e}_1, \mathbf {e}_2\right\rangle < \sin ^2\theta _2. \end{aligned}$$
(32)

By assumption, we have \(\rho > \cos ^2\theta _2\). Hence, estimate (32) is equivalent to

$$\begin{aligned} \left\| \mathbf {x}+\nicefrac {1}{2} \mathbf {e}_1\right\| ^2 - 2\frac{\cos ^2\theta _2\tan \theta _2}{\rho -\cos ^2\theta _2}\left\langle \mathbf {x}+\nicefrac {1}{2} \mathbf {e}_1, \mathbf {e}_2\right\rangle < \frac{\sin ^2\theta _2}{\rho -\cos ^2\theta _2}. \end{aligned}$$

Completing the square gives

$$\begin{aligned} \nonumber \left\| \mathbf {x}+\nicefrac {1}{2} \mathbf {e}_1 - \frac{\cos \theta _2\sin \theta _2}{\rho -\cos ^2\theta _2}\mathbf {e}_2\right\| ^2&< \frac{\sin ^2\theta _2}{\rho -\cos ^2\theta _2}+\frac{\cos ^2\theta _2\sin ^2\theta _2}{(\rho -\cos ^2\theta _2)^2} \\&= \frac{\rho \sin ^2\theta _2}{(\rho -\cos ^2\theta _2)^2}. \end{aligned}$$
(33)

Since \(\theta _2 \in ]0,\pi [\), we have \(\sin \theta _2 > 0\), and so (33) is equivalent to

$$\begin{aligned} \mathbf {x}\in B\left( -\nicefrac {1}{2} \mathbf {e}_1 + \frac{\cos \theta _2\sin \theta _2}{\rho -\cos ^2\theta _2}\mathbf {e}_2, \frac{\sqrt{\rho }\sin \theta _2}{\rho -\cos ^2\theta _2}\,\right) . \end{aligned}$$

Since \(\mathbf {p}_1 = -\nicefrac {1}{2} \mathbf {e}_1\), this establishes (14), which contains (31) as a special case.
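The ball characterization just derived can be verified numerically. The following sketch is illustrative only: it takes \({\mathbf {M}}_{\theta }\) to be the clockwise rotation by \(\theta \), the convention consistent with \(S_{\theta _2}\mathbf {e}_1 = \mathbf {e}_1+\tan \theta _2\mathbf {e}_2\), and the values of \(\rho \) and \(\theta _2\) are arbitrary assumptions satisfying \(\rho >\cos ^2\theta _2\) and \(\theta _{2}\ne \nicefrac \pi 2\).

```python
import numpy as np

# Assumed sample parameters with rho > cos(th2)**2 and th2 != pi/2.
rho, th2 = 1.9, 2.0
e1, e2 = np.eye(2)
p1, p2 = -0.5 * e1, 0.5 * e1

def M(t):
    # Clockwise rotation by t, the convention for which
    # S_{th2} e1 = (1/cos th2) M(-th2) e1 = e1 + tan(th2) e2.
    return np.array([[np.cos(t), np.sin(t)],
                     [-np.sin(t), np.cos(t)]])

def T(x):
    # T_{A2,B} x = p2 + cos(th2) M_{th2} (x - p2), as in Proposition 5.1.
    return p2 + np.cos(th2) * (M(th2) @ (x - p2))

def V1(x):
    return np.linalg.norm(x - p1) ** 2

# Center and radius of the ball derived above.
center = p1 + (np.cos(th2) * np.sin(th2) / (rho - np.cos(th2) ** 2)) * e2
radius = np.sqrt(rho) * np.sin(th2) / (rho - np.cos(th2) ** 2)

rng = np.random.default_rng(0)
for x in rng.uniform(-3.0, 3.0, size=(2000, 2)):
    decrease_fails = V1(T(x)) > rho * V1(x)
    in_ball = np.linalg.norm(x - center) < radius
    assert decrease_fails == in_ball
print("ball characterization of condition (14) verified on samples")
```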

1.2 An Auxiliary Lemma

Before we can proceed with the proof of the proposition, we need the following auxiliary result, which provides a characterization of the set \(D_3\) defined in (9). Note that since \(A_1\) and \(A_2\) are two non-parallel straight lines, their intersection is a single point.

Lemma A.1

Let \(\mathbf {c}\) be the unique intersection point of \(A_1\) and \(A_2\). Then,

$$\begin{aligned} \mathbf {c}=\left( \frac{\sin (\theta _1+\theta _2)}{2\sin (\theta _2-\theta _1)},\frac{\sin \theta _1\sin \theta _2}{\sin (\theta _2-\theta _1)}\right) \end{aligned}$$
(34)

and we have

$$\begin{aligned} D_3 = \left\{ \mathbf {x}\in {\mathbb {R}}^2:\langle \mathbf {x}-\mathbf {c},\mathbf {n}_1\rangle = 0\right\} \cup \left\{ \mathbf {x}\in {\mathbb {R}}^2:\langle \mathbf {x}-\mathbf {c},\mathbf {n}_2\rangle = 0\right\} , \end{aligned}$$

where \(\mathbf {n}_1\), \(\mathbf {n}_2\), are given by

$$\begin{aligned} \mathbf {n}_1&= \left( \cos \left( \frac{\theta _1+\theta _2}{2}\right) ,\sin \left( \frac{\theta _1+\theta _2}{2}\right) \right) , \end{aligned}$$
(35)
$$\begin{aligned} \mathbf {n}_2&= \left( \sin \left( \frac{\theta _1+\theta _2}{2}\right) ,-\cos \left( \frac{\theta _1+\theta _2}{2}\right) \right) . \end{aligned}$$
(36)

Proof

The line \(A_1\) is the collection of all points \(\mathbf {x}\in {\mathbb {R}}^2\) that satisfy \(\langle \mathbf {x},\mathbf {e}_1\rangle \sin \theta _1 - \langle \mathbf {x},\mathbf {e}_2\rangle \cos \theta _1 + \nicefrac {1}{2} \sin \theta _1 = 0\). Similarly, the line \(A_2\) is the collection of all points \(\mathbf {x}\in {\mathbb {R}}^2\) that satisfy \(\langle \mathbf {x},\mathbf {e}_1\rangle \sin \theta _2 - \langle \mathbf {x},\mathbf {e}_2\rangle \cos \theta _2 - \nicefrac {1}{2}\sin \theta _2 = 0\). Solving these two equations shows that the intersection point \(\mathbf {c}\) of \(A_1\) and \(A_2\) is indeed given by (34). Finally, the (normalized) normal vectors to the lines bisecting the angles between \(A_1\) and \(A_2\) are given by (35) and (36), which completes the proof of the auxiliary lemma. \(\square \)
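The closed-form expressions in Lemma A.1 can be checked numerically; in the sketch below, the angles are arbitrary assumed values with \(0<\theta _1<\theta _2<\pi \).

```python
import numpy as np

# Assumed sample angles with 0 < th1 < th2 < pi.
th1, th2 = 0.7, 2.3

# Intersection point c from (34).
c = np.array([np.sin(th1 + th2) / (2.0 * np.sin(th2 - th1)),
              np.sin(th1) * np.sin(th2) / np.sin(th2 - th1)])

# Residuals of the two line equations derived in the proof.
r1 = c[0] * np.sin(th1) - c[1] * np.cos(th1) + 0.5 * np.sin(th1)  # A1
r2 = c[0] * np.sin(th2) - c[1] * np.cos(th2) - 0.5 * np.sin(th2)  # A2
assert abs(r1) < 1e-12 and abs(r2) < 1e-12

# The normal vectors (35) and (36) are orthonormal.
a = (th1 + th2) / 2.0
n1 = np.array([np.cos(a), np.sin(a)])
n2 = np.array([np.sin(a), -np.cos(a)])
assert abs(n1 @ n2) < 1e-12
print("c lies on both lines; n1 and n2 are orthonormal")
```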

We are now in a position to establish condition (16). The proof of condition (17) follows closely that of condition (16) and is thus omitted for reasons of space.

1.3 Establishing Condition (16)

Since the direction vector of \(A_i\) is \((\cos \theta _i, \sin \theta _i)\), \(\mathbf {d}_{i}^{\perp }:=(\sin \theta _i, -\cos \theta _i)\) is a normal vector to \(A_{i}\) and the distance of a point Q to \(A_{i}\) is given by \(|\langle Q-\mathbf {p}_{i},\mathbf {d}_{i}^{\perp }\rangle |\). We compute

$$\begin{aligned} d\left( \mathbf {p}_1+\frac{\cos \theta _2\sin \theta _2}{\rho -\cos ^2\theta _2}\mathbf {e}_2, A_1\right)&= \left| \frac{\cos \theta _1\cos \theta _2\sin \theta _2}{\rho -\cos ^2\theta _2}\right| \end{aligned}$$
(37)

and

$$\begin{aligned} d\left( \mathbf {p}_1+\frac{\cos \theta _2\sin \theta _2}{\rho -\cos ^2\theta _2}\mathbf {e}_2, A_2\right)&= \left| \frac{\cos ^2\theta _2\sin \theta _2}{\rho -\cos ^2\theta _2}+\sin \theta _2\right| . \end{aligned}$$
(38)

Now, since \(\theta _1 \in ]0,\pi /2]\) and \(\theta _2\in ]\theta _{1},\pi [\), we have \(|\cos \theta _1\cos \theta _2| < 1\), \(\cos ^{2}\theta _{i}<1\), \(\sin \theta _2 > 0\), and \((1+\sin \theta _{1})(1+\sin \theta _{2}) > 1\). Since we assumed that \(\rho \ge (1+\sin \theta _{1})(1+\sin \theta _{2})\), we have \(|\cos \theta _1\cos \theta _2|< 1 < \rho \). With these estimates, we can bound (37) generously as

$$\begin{aligned} d\left( \mathbf {p}_1+\frac{\cos \theta _2\sin \theta _2}{\rho -\cos ^2\theta _2}\mathbf {e}_2, A_1\right) < \frac{\rho \sin \theta _2}{\rho -\cos ^2\theta _2} \end{aligned}$$
(39)

and simplify (38) to

$$\begin{aligned} d\left( \mathbf {p}_1+\frac{\cos \theta _2\sin \theta _2}{\rho -\cos ^2\theta _2}\mathbf {e}_2, A_2\right) = \sin \theta _2\left( \frac{\cos ^2\theta _2}{\rho -\cos ^2\theta _2} + 1\right) = \frac{\rho \sin \theta _2}{\rho -\cos ^2\theta _2}. \end{aligned}$$
(40)
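Both distance formulas can be confirmed numerically. In the sketch below, the angles are assumed sample values, and \(\rho \) is set to \((1+\sin \theta _1)(1+\sin \theta _2)\), its smallest admissible value under the standing assumption.

```python
import numpy as np

# Assumed sample angles; rho at its smallest admissible value.
th1, th2 = 0.7, 2.3
rho = (1 + np.sin(th1)) * (1 + np.sin(th2))

e1, e2 = np.eye(2)
p1, p2 = -0.5 * e1, 0.5 * e1
Q = p1 + (np.cos(th2) * np.sin(th2) / (rho - np.cos(th2) ** 2)) * e2

def dist_to_line(q, p, th):
    # Distance |<q - p, d_perp>| with normal d_perp = (sin th, -cos th).
    return abs((q - p) @ np.array([np.sin(th), -np.cos(th)]))

bound = rho * np.sin(th2) / (rho - np.cos(th2) ** 2)
assert dist_to_line(Q, p1, th1) < bound                # estimate (39)
assert abs(dist_to_line(Q, p2, th2) - bound) < 1e-12   # identity (40)
print("distance estimates (39) and (40) verified")
```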

In light of (39) and (40), \(A_{1}\) is the closer line to \(\mathbf {p}_1+\frac{\cos \theta _2\sin \theta _2}{\rho -\cos ^2\theta _2}\mathbf {e}_2\), so it follows that \(\mathbf {p}_1+\frac{\cos \theta _2\sin \theta _2}{\rho -\cos ^2\theta _2}\mathbf {e}_2 \in D_1\). Therefore, in order to prove (16), it is enough to show that

$$\begin{aligned} \frac{\sqrt{\rho }\sin \theta _2}{\rho -\cos ^2\theta _2} \le d\left( \mathbf {p}_1+\frac{\cos \theta _2\sin \theta _2}{\rho -\cos ^2\theta _2}\mathbf {e}_2, D_3\right) , \end{aligned}$$
(41)

that is, we want the radius of the ball to be smaller than the distance of the center to the boundary of \(D_1\) (which is exactly \(D_3\)). Now, by the auxiliary Lemma A.1, we have

$$\begin{aligned} d\left( \mathbf {p}_1+\frac{\cos \theta _2\sin \theta _2}{\rho -\cos ^2\theta _2}\mathbf {e}_2, D_3\right) = \min _{i=1,2}\left\{ \left| \left\langle \mathbf {c}-\mathbf {p}_1-\frac{\cos \theta _2\sin \theta _2}{\rho -\cos ^2\theta _2}\mathbf {e}_2, \mathbf {n}_i\right\rangle \right| \right\} . \end{aligned}$$
(42)

By squaring both sides of (41) and using (42), we need to establish that

$$\begin{aligned} \frac{\rho \sin ^2\theta _2}{(\rho -\cos ^2\theta _2)^2} \le \min _{i=1,2}\Bigg \{\left( \left\langle \mathbf {c}-\mathbf {p}_1-\frac{\cos \theta _2\sin \theta _2}{\rho -\cos ^2\theta _2}\mathbf {e}_2, \mathbf {n}_i\right\rangle \right) ^{2}\Bigg \}. \end{aligned}$$

Using (34)–(36), as well as standard trigonometric identities, the right-hand side simplifies to

$$\begin{aligned}&= \min \Bigg \{ \Bigg (\frac{\sin \theta _2}{2\sin \left( \frac{\theta _2-\theta _1}{2}\right) } - \frac{\cos \theta _2\sin \theta _2}{\rho -\cos ^2\theta _2}\sin \left( \frac{\theta _1+\theta _2}{2}\right) \Bigg )^{2}, \end{aligned}$$
(43)
$$\begin{aligned}&\qquad \Bigg (\frac{\sin \theta _2}{2\cos \left( \frac{\theta _2-\theta _1}{2}\right) }+\frac{\cos \theta _2\sin \theta _2}{\rho -\cos ^2\theta _2}\cos \left( \frac{\theta _1+\theta _2}{2}\right) \Bigg )^{2}\,\Bigg \}, \end{aligned}$$
(44)

so that we need to verify two inequalities, both of which can be simplified further. Starting with (43), we take a common denominator and extract common factors. Using that \(0<\theta _{2}<\pi \), \(\sin ^{2}\theta _{2}>0\), and \(\rho >\cos ^{2}\theta _{2}\), as well as standard trigonometric identities, we simplify (43) to

$$\begin{aligned} \rho ^2 - (2 - 2\sin \theta _1\sin \theta _2)\rho + \cos ^2\theta _1\cos ^2\theta _2 \ge 0. \end{aligned}$$
(45)

A similar argument can be made for (44), which simplifies to

$$\begin{aligned} \rho ^2 - (2 + 2\sin \theta _1\sin \theta _2)\rho + \cos ^2\theta _1\cos ^2\theta _2 \ge 0. \end{aligned}$$
(46)

For \(\rho > 0\), the left-hand side of (45) is greater than the left-hand side of (46). So it is sufficient to verify that (46) holds.

The roots of the quadratic polynomial in \(\rho \) on the left-hand side of (46) are \(\rho _{1,2} = 1+\sin \theta _1\sin \theta _2 \pm (\sin \theta _1 + \sin \theta _2)\), so that (46) holds whenever \(\rho \) is greater than or equal to the larger of the two roots, i.e.,

$$\begin{aligned} \rho \ge 1 + \sin \theta _1\sin \theta _2 + \sin \theta _1 + \sin \theta _2 = (1+\sin \theta _1)(1+\sin \theta _2), \end{aligned}$$

which establishes (16).
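The factorization of the larger root, and the sufficiency of (46) for (45), can be confirmed numerically; the angles below are assumed sample values.

```python
import numpy as np

# Assumed sample angles with 0 < th1 < th2 < pi.
th1, th2 = 0.7, 2.3
s1, s2 = np.sin(th1), np.sin(th2)
c1, c2 = np.cos(th1), np.cos(th2)

def q45(r):
    return r * r - (2 - 2 * s1 * s2) * r + (c1 * c2) ** 2  # LHS of (45)

def q46(r):
    return r * r - (2 + 2 * s1 * s2) * r + (c1 * c2) ** 2  # LHS of (46)

# The larger root of (46) factors as (1 + sin th1)(1 + sin th2).
root = 1 + s1 * s2 + s1 + s2
assert abs(root - (1 + s1) * (1 + s2)) < 1e-12
assert abs(q46(root)) < 1e-9

# Both quadratics are nonnegative for every rho >= root.
for r in np.linspace(root, root + 5.0, 50):
    assert q46(r) >= -1e-9 and q45(r) >= -1e-9
print("conditions (45) and (46) hold for rho >= (1+sin th1)(1+sin th2)")
```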

This completes the proof of the proposition.\(\square \)


Cite this article

Giladi, O., Rüffer, B.S. A Lyapunov Function Construction for a Non-convex Douglas–Rachford Iteration. J Optim Theory Appl 180, 729–750 (2019). https://doi.org/10.1007/s10957-018-1405-3
