Sparsity and convergence analysis of generalized conditional gradient methods
In this talk we introduce suitable generalized conditional gradient algorithms for solving variational inverse problems consisting of the sum of a smooth fidelity term and a convex, coercive regularizer. We exploit the sparse structure of the variational problem by designing iterates as suitable linear combinations of the extremal points of the ball of the regularizer, and we prove sublinear convergence of the algorithm. Then, under further structural assumptions, we show how to improve the rate of convergence by lifting the problem to the space of measures supported on the above-mentioned extremal points and using known convergence results for generalized conditional gradient methods for total variation minimization. Finally, we apply our algorithm to solve dynamic inverse problems regularized with optimal transport energies.
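For orientation, a schematic form of a generalized conditional gradient step for this class of problems is sketched below; the precise assumptions, step-size rule, and extremal-point selection are those discussed in the talk, and the notation here is illustrative only.

```latex
% Problem: minimize F(u) = f(u) + g(u), with f smooth (fidelity)
% and g convex and coercive (regularizer).
%
% Generalized conditional gradient iteration (schematic):
%   v_k \in \arg\min_{v} \; \langle \nabla f(u_k), v \rangle + g(v),
%   u_{k+1} = u_k + \gamma_k \,(v_k - u_k),
%
% where the minimizer v_k can be chosen among the extremal points of a
% sublevel ball of g, so that u_k stays a sparse linear combination of
% such points. With a step size such as \gamma_k = 2/(k+2), one obtains
% a sublinear rate F(u_k) - \min F = O(1/k).
```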
I am a Newton International Fellow at the University of Cambridge working in the CIA (Cambridge Image Analysis) group. Previously, I was a postdoctoral researcher at Karl-Franzens University in Graz and at the University of Würzburg. I obtained my PhD in 2017 from the Max Planck Institute for Mathematics in the Sciences in Leipzig.
2021-11-24 at 3:00 pm