Estimation of high-dimensional low-rank matrices

Link:
Author:
Year of publication:
2011
Media type:
Text
Keywords:
  • Empirical process
  • High-dimensional low-rank matrices
  • Penalized least-squares estimator
  • Quasi-convex Schatten class embeddings
  • Schatten norm
  • Sparse recovery
Description:
  • Suppose that we observe entries or, more generally, linear combinations of entries of an unknown m × T matrix A corrupted by noise. We are particularly interested in the high-dimensional setting where the number mT of unknown entries can be much larger than the sample size N. Motivated by several applications, we consider estimation of matrix A under the assumption that it has small rank. This can be viewed as a dimension reduction or sparsity assumption. In order to shrink toward a low-rank representation, we investigate penalized least squares estimators with a Schatten-p quasi-norm penalty term, p ≤ 1. We study these estimators under two possible assumptions: a modified version of the restricted isometry condition and a uniform bound on the ratio "empirical norm induced by the sampling operator / Frobenius norm." The main results are stated as nonasymptotic upper bounds on the prediction risk and on the Schatten-q risk of the estimators, where q ∈ [p, 2]. The rates that we obtain for the prediction risk are of the form rm/N (for m = T), up to logarithmic factors, where r is the rank of A. The particular examples of multi-task learning and matrix completion are worked out in detail. The proofs are based on tools from the theory of empirical processes. As a by-product, we derive bounds for the kth entropy numbers of the quasi-convex Schatten class embeddings S_p^M → S_2^M, p < 1, which are of independent interest.
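  • The penalized least-squares estimator described above can be sketched, for the convex boundary case p = 1 (nuclear-norm penalty) and the matrix-completion sampling operator, as proximal gradient descent with singular value soft-thresholding. This is a minimal illustrative sketch under those assumptions, not the paper's exact estimator; the function names, step size, and regularization value are chosen for the example.

```python
import numpy as np

def svt(Z, tau):
    """Singular value soft-thresholding: prox of tau * (nuclear norm)."""
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)
    return U @ (np.maximum(s - tau, 0.0)[:, None] * Vt)

def complete(Y, mask, lam, step=1.0, iters=200):
    """Proximal gradient for 0.5 * ||mask*(A - Y)||_F^2 + lam * ||A||_S1.

    Y    : observed matrix (zeros at unobserved entries)
    mask : 0/1 matrix marking the observed entries
    lam  : regularization weight on the Schatten-1 (nuclear) norm
    """
    A = np.zeros_like(Y)
    for _ in range(iters):
        grad = mask * (A - Y)          # gradient of the quadratic fit term
        A = svt(A - step * grad, step * lam)  # proximal step shrinks singular values
    return A
```

    For p < 1 the Schatten-p penalty is only a quasi-norm and the problem is nonconvex; a common heuristic (not shown) replaces the soft-thresholding of the singular values by the corresponding p-dependent thresholding rule.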
License:
  • info:eu-repo/semantics/openAccess
Source system:
Forschungsinformationssystem der UHH

Internal metadata
Source record
oai:www.edit.fis.uni-hamburg.de:publications/0b586c88-de1d-4882-a92b-da4579694d5c