Kabir Aladin Chandrasekher, Mengqi Lou, Ashwin Pananjady, Alternating minimization for generalized rank-1 matrix sensing: sharp predictions from a random initialization, Information and Inference: A Journal of the IMA, Volume 13, Issue 3, September 2024, iaae025, https://doi.org/10.1093/imaiai/iaae025
Abstract
We consider the problem of estimating the factors of a rank-$1$ matrix with i.i.d. Gaussian, rank-$1$ measurements that are nonlinearly transformed and corrupted by noise. Considering two prototypical choices for the nonlinearity, we study the convergence properties of a natural alternating update rule for this non-convex optimization problem starting from a random initialization. We show sharp convergence guarantees for a sample-split version of the algorithm by deriving a deterministic one-step recursion that is accurate even in high-dimensional problems. Notably, while the infinite-sample population update is uninformative and suggests exact recovery in a single step, the algorithm—and our deterministic one-step prediction—converges geometrically fast from a random initialization. Our sharp, non-asymptotic analysis also exposes several other fine-grained properties of this problem, including how the nonlinearity and noise level affect convergence behaviour. On a technical level, our results are enabled by showing that the empirical error recursion can be predicted by our deterministic one-step updates within fluctuations of the order $n^{-1/2}$ when each iteration is run with $n$ observations. Our technique leverages leave-one-out tools originating in the literature on high-dimensional $M$-estimation and provides an avenue for sharply analyzing complex iterative algorithms from a random initialization in other high-dimensional optimization problems with random data.
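The sample-split alternating update rule described in the abstract can be sketched in the simplest setting: identity nonlinearity and noiseless rank-$1$ measurements $y_i = (x_i^\top \mu^*)(z_i^\top \nu^*)$. The function name `alt_min_rank1` and the particular batching scheme are illustrative assumptions, not the authors' exact procedure; the point is that each update with one factor fixed is an ordinary least-squares problem solved on a fresh batch of data.

```python
import numpy as np

def alt_min_rank1(y, X, Z, iters=10, rng=None):
    # Illustrative sketch of sample-split alternating minimization for
    # rank-1 matrix sensing with identity nonlinearity:
    #   y_i = (x_i^T mu*) (z_i^T nu*).
    # Each half-step uses a fresh batch of observations (sample splitting),
    # so 2 * iters disjoint batches are carved out of the n samples.
    n, d = X.shape
    rng = np.random.default_rng(rng)
    nu = rng.standard_normal(d)                      # random initialization
    batches = np.array_split(np.arange(n), 2 * iters)
    mu = None
    for t in range(iters):
        # With nu fixed, y_i ~ (z_i^T nu) * (x_i^T mu) is linear in mu:
        # regress y on the rows (z_i^T nu) x_i.
        idx = batches[2 * t]
        A = X[idx] * (Z[idx] @ nu)[:, None]
        mu, *_ = np.linalg.lstsq(A, y[idx], rcond=None)
        # Symmetric update for nu with mu fixed, on a fresh batch.
        idx = batches[2 * t + 1]
        B = Z[idx] * (X[idx] @ mu)[:, None]
        nu, *_ = np.linalg.lstsq(B, y[idx], rcond=None)
    return mu, nu
```

Because the factors are only identifiable up to a shared scale, accuracy is naturally measured on the outer product: in the noiseless sketch above, `np.outer(mu, nu)` converges rapidly to $\mu^* \nu^{*\top}$ from the random start, mirroring the geometric convergence the abstract describes.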