BIG & QUIC: Sparse inverse covariance estimation for a million variables
Recorded 11 July 2013 in Lausanne, Vaud, Switzerland
Event: SPARS 2013 - Signal Processing with Adaptive Sparse Structured Representations
The L1-regularized Gaussian maximum likelihood estimator has been shown to have strong statistical guarantees in recovering a sparse inverse covariance matrix, even in high-dimensional settings. However, it requires solving a difficult non-smooth log-determinant program whose number of parameters scales quadratically with the number of Gaussian variables. State-of-the-art methods thus do not scale to problems with more than 20,000 variables. In this talk, I will describe a quadratic approximation method that can solve 1-million-dimensional L1-regularized log-determinant problems (which thus have one trillion parameters) on a single computer. In order to do so, we carefully exploit the underlying structure of the problem. Our innovations include (i) a second-order Newton-like method, (ii) division of the variables into free and fixed sets, (iii) a block coordinate descent method, and (iv) a memory-efficient scheme that approximates key quantities within the algorithm. In spite of the latter modification, we are able to theoretically analyze our procedure and show that the proposed BIGQUIC algorithm can achieve super-linear or even quadratic convergence rates. Experimental results on synthetic and real application data demonstrate the improvements in performance over other state-of-the-art methods.
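The program the talk refers to is the graphical lasso objective, minimize over positive-definite Theta of -log det(Theta) + tr(S Theta) + lambda * ||Theta||_1, where S is the empirical covariance and Theta the estimated precision (inverse covariance) matrix. As a small-scale illustration only (not the BIGQUIC algorithm itself), scikit-learn's `GraphicalLasso` solves this same objective and can be used to see the sparse recovery behavior on a toy problem; the dimensions, seed, and regularization strength below are illustrative choices, not values from the talk:

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(0)

# Sparse ground-truth precision matrix: tridiagonal, hence a chain
# graphical model with only nearest-neighbor conditional dependencies.
p = 20
theta_true = np.eye(p) + 0.4 * (np.eye(p, k=1) + np.eye(p, k=-1))
cov_true = np.linalg.inv(theta_true)

# Sample from the corresponding zero-mean Gaussian.
n = 2000
X = rng.multivariate_normal(np.zeros(p), cov_true, size=n)

# Fit the L1-regularized Gaussian MLE:
#   minimize  -log det(Theta) + tr(S @ Theta) + alpha * ||Theta||_1
model = GraphicalLasso(alpha=0.05).fit(X)
theta_hat = model.precision_

# The L1 penalty shrinks entries outside the true tridiagonal band
# toward zero, recovering the sparsity pattern.
dist = np.abs(np.subtract.outer(np.arange(p), np.arange(p)))
off_band = np.abs(theta_hat[dist > 1])
print("largest |entry| outside the true band:", off_band.max())
```

At p = 20 there are only p^2 = 400 parameters; the point of BIGQUIC is that the same objective at p = 10^6 has 10^12 parameters, which is far beyond what generic solvers like this one can handle.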