
NeurIPS 2021 Lead Author Spotlight
Alejandro Carderera, Machine Learning PhD student
Simple steps are all you need: Frank-Wolfe and generalized self-concordant functions
#OPTIMIZATION
We prove that a simple step size achieves an O(1/t) convergence rate in both the primal gap and the Frank-Wolfe (dual) gap for a simple variant of the Frank-Wolfe algorithm when minimizing generalized self-concordant functions over compact convex domains, using only first-order information. Previous approaches achieved an O(1/t) convergence rate in the primal gap using both first- and second-order information.

Q&A with Alejandro Carderera
What motivated your work on this paper?
We wanted to come up with a simple 5-line optimization algorithm that achieves the same convergence guarantees as the more complex algorithms in the literature for minimizing generalized self-concordant functions in the projection-free setting.
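As a rough illustration, here is a minimal Python sketch of a Frank-Wolfe loop with the simple agnostic step size γ_t = 2/(t+2), using only a first-order oracle and a linear minimization oracle (LMO). This is a generic sketch, not the exact variant analyzed in the paper; the logistic loss (a standard example of a generalized self-concordant function) and the probability-simplex LMO are illustrative stand-ins, and the names `frank_wolfe`, `grad_f`, and `lmo` are our own.

```python
import numpy as np

def frank_wolfe(grad_f, lmo, x0, num_steps=1000):
    """Frank-Wolfe with the simple step size 2/(t+2): no line search,
    no projections, and no second-order (Hessian) information."""
    x = x0
    for t in range(num_steps):
        g = grad_f(x)
        v = lmo(g)                # vertex minimizing the linear approximation
        gamma = 2.0 / (t + 2.0)   # simple, problem-agnostic step size
        x = x + gamma * (v - x)   # convex combination stays in the domain
    return x

# Illustrative setup (hypothetical data): logistic loss over the probability
# simplex, whose LMO returns the vertex with the smallest gradient coordinate.
rng = np.random.default_rng(0)
A = rng.normal(size=(50, 10))
y = rng.choice([-1.0, 1.0], size=50)
grad_f = lambda x: -(A.T @ (y / (1.0 + np.exp(y * (A @ x))))) / len(y)
lmo = lambda g: np.eye(len(g))[np.argmin(g)]
x_star = frank_wolfe(grad_f, lmo, x0=np.ones(10) / 10)
```

The core loop really is only a handful of lines: each iteration calls the gradient oracle once, calls the LMO once, and takes a convex combination, so every iterate remains feasible without any projection step.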
If readers remember one takeaway from the paper, what should it be and why?
The main takeaway is that simple algorithms are usually more robust than more complex ones, and in some special cases they can achieve similar convergence guarantees!
Were there any “aha” moments or lessons that you’ll use to inform your future work?
While surveying the literature, we realized that many of the algorithms for minimizing this class of functions were complex, relied on expensive operations, and still only achieved sublinear convergence. We found that with a few simple tricks we could prove the same theoretical properties while relying on much cheaper operations, making the resulting algorithm less expensive to run.
What are you most excited for at NeurIPS and what do you hope to take away from the experience?
I’m excited to meet other researchers working on similar topics, and I’m sure that new and interesting ideas will spring from these interactions at the conference.
