Efficient First-Order Methods for Large-Scale Stochastic Convex Optimization
Recorded 11 June 2012 in Lausanne, Vaud, Switzerland
Event: SuRI - I&C - Summer Research Institute
The recent ubiquity of big data sets has rendered traditional convex optimization procedures, such as interior-point methods, impractical. The current methods of choice for solving large-scale convex programs in practice are first-order methods (FOMs), which rely only on minimal first-order information (viz. subgradients). Randomization plays a crucial role in these methods, whether as part of the input or as a means of acceleration. Research in FOMs has grown explosively of late, and a well-developed complexity theory for various kinds of optimization problems has emerged.
In this talk I will survey the state of the art in FOMs (focusing specifically on stochastic convex optimization), provide upper and lower bounds for various classes of problems, and indicate future avenues for research in this exciting and widely applicable area.
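To make the setting concrete, the following is a minimal sketch (not from the talk) of the kind of first-order method the abstract describes: stochastic subgradient descent with a diminishing step size and iterate averaging, applied to a hypothetical one-dimensional example. The function names and the example objective are illustrative assumptions, not the speaker's algorithm.

```python
import random

def sgd(grad_sample, x0, steps=1000, step_size=0.1):
    """Stochastic subgradient descent sketch.

    grad_sample(x) returns an unbiased stochastic subgradient at x.
    Uses a 1/sqrt(t) step-size schedule and returns the running
    average of the iterates, a standard choice in the stochastic
    convex setting.
    """
    x = x0
    avg = x0
    for t in range(1, steps + 1):
        g = grad_sample(x)
        x = x - (step_size / t ** 0.5) * g   # diminishing step size
        avg = avg + (x - avg) / t            # running average of iterates
    return avg

# Illustrative example: minimize E[(x - z)^2] with z ~ Uniform(0, 1).
# The minimizer is x* = E[z] = 0.5.
random.seed(0)
grad = lambda x: 2 * (x - random.random())   # unbiased stochastic gradient
x_star = sgd(grad, x0=0.0, steps=20000)      # converges near 0.5
```

The averaging step matters: individual iterates keep fluctuating because each gradient is noisy, while the averaged iterate attains the optimal O(1/sqrt(T)) convergence rate for this problem class.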