#### Preview

We address the problem of recovering a sparse *n*-vector within a given subspace. This problem is a subtask of some approaches to dictionary learning and sparse principal component analysis; hence, scaling laws for the recovery of sparse vectors make it easier to derive and prove recovery results in those applications. In this paper, we present a scaling law for recovering the sparse vector from a subspace spanned by the sparse vector and *k* random vectors. We prove that, with high probability, the sparse vector is the output of one of *n* linear programs provided its support size *s* satisfies [math]. The scaling law still holds when the desired vector is only approximately sparse. To obtain a single estimate from the *n* linear program outputs, we must select the sparsest one. This selection can be based on any proxy for sparsity, and the choice of proxy can improve or worsen the scaling law. If sparsity is measured in an ℓ_{1}/ℓ_{∞} sense, the scaling law cannot be better than [math]. Computer simulations show that selecting the sparsest output in the ℓ_{1}/ℓ_{2} or thresholded-ℓ_{0} sense can yield a larger parameter range for successful recovery than the ℓ_{1}/ℓ_{∞} sense allows.
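The recovery procedure described above can be sketched in code. The following is a minimal illustration, not the paper's exact algorithm: it assumes the subspace is given by an *n* × (*k*+1) basis matrix `B`, solves the *i*-th linear program as "minimize ‖*Bc*‖₁ subject to (*Bc*)ᵢ = 1" via the standard auxiliary-variable reformulation, and selects among the *n* outputs using the ℓ₁/ℓ₂ sparsity proxy. All names (`recover_sparse_vector`, `B`) are hypothetical.

```python
import numpy as np
from scipy.optimize import linprog

def recover_sparse_vector(B):
    """Sketch of LP-based recovery of a sparse vector in a subspace.

    B : (n, d) basis matrix for the subspace (d = k + 1 in the abstract's
    setting). For each i, solve  min ||B c||_1  s.t.  (B c)_i = 1,  then
    return the candidate with the smallest l1/l2 ratio (a sparsity proxy).
    """
    n, d = B.shape
    best, best_score = None, np.inf
    for i in range(n):
        # Variables z = (c, t) with t >= |B c| elementwise:
        # minimize sum(t)  s.t.  B c - t <= 0,  -B c - t <= 0,  (B c)_i = 1.
        c_obj = np.concatenate([np.zeros(d), np.ones(n)])
        A_ub = np.block([[B, -np.eye(n)], [-B, -np.eye(n)]])
        b_ub = np.zeros(2 * n)
        A_eq = np.concatenate([B[i], np.zeros(n)])[None, :]
        b_eq = np.array([1.0])
        res = linprog(c_obj, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                      bounds=[(None, None)] * (d + n))
        if not res.success:
            continue
        x = B @ res.x[:d]
        # l1/l2 proxy: small for sparse vectors (at most sqrt(s) on an
        # s-sparse unit vector), large (~sqrt(2n/pi)) for dense ones.
        score = np.linalg.norm(x, 1) / np.linalg.norm(x, 2)
        if score < best_score:
            best, best_score = x, score
    return best
```

In the regime the abstract describes (support size *s* small relative to *n* and *k*), the LP whose constraint index *i* lies in the support of the planted sparse vector returns a scalar multiple of that vector, and the ℓ₁/ℓ₂ proxy then singles it out among the *n* candidates.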

*Keywords: *
sparsity;
linear programming;
signal recovery;
sparse principal component analysis;
dictionary learning

*Journal Article.*

*Subjects: *
Science and Mathematics;
Mathematics;
Applied Mathematics;
Computer Science
