Abstract: We study the asymptotic behavior of the Forward-Backward algorithm, a cornerstone for solving structured optimization problems. It is well known that, for general convex functions, this algorithm converges in values at a rate O(1/n) when minimizers exist, and can be arbitrarily slow when the problem has no solution. We show how simple geometric assumptions guarantee better rates for this algorithm. We will discuss how this geometry can be identified in practice, providing for instance a new sum rule. In particular, this analysis explains why the Forward-Backward algorithm converges linearly most of the time, even in the absence of strong convexity.
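For context, a minimal sketch of the Forward-Backward (proximal gradient) iteration x_{k+1} = prox_{t g}(x_k - t ∇f(x_k)) on a small LASSO problem, min_x ½‖Ax − b‖² + λ‖x‖₁, where the proximal step is soft-thresholding. The problem data, step size choice, and iteration count below are illustrative assumptions, not taken from the talk.

```python
import numpy as np

# Hypothetical problem data for illustration only.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
b = rng.standard_normal(20)
lam = 0.1
# Step size t = 1/L, with L the Lipschitz constant of the gradient
# of the smooth part f(x) = 0.5*||Ax - b||^2, i.e. ||A^T A||_2.
t = 1.0 / np.linalg.norm(A.T @ A, 2)

def soft_threshold(v, tau):
    """Proximal operator of tau * ||.||_1 (soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

x = np.zeros(5)
for _ in range(500):
    grad = A.T @ (A @ x - b)                    # forward (gradient) step
    x = soft_threshold(x - t * grad, t * lam)   # backward (proximal) step

# Objective value at the final iterate.
obj = 0.5 * np.linalg.norm(A @ x - b) ** 2 + lam * np.abs(x).sum()
```

On well-conditioned instances like this one, the iterates typically settle on a fixed sparsity pattern after finitely many steps, after which the convergence is linear in practice, which is the phenomenon the geometric analysis in the talk aims to explain.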
Location: Room MIP 1R3