
What is Compressive Sensing?

Definitions

$$ \mathbf{y} = \Theta \mathbf{s} $$

Given $\Theta \in \mathbb{R}^{n \times p}$ with $n \ll p$, the underdetermined matrix equation above has infinitely many solutions $\mathbf{s}$. The optimization problem that finds the sparsest solution under the constraint of satisfying this matrix equation is referred to as Compressed Sensing.

$$ \begin{matrix} \text{Minimize} & \left\| \mathbf{s} \right\|_{0} \\ \text{subject to} & \mathbf{y} = \Theta \mathbf{s} \end{matrix} $$

Here, $\left\| \cdot \right\|_{0}$ is the $0$-norm, defined as the number of non-zero elements in a vector.
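As a concrete sketch of this setup (hypothetical variable names; NumPy assumed), the following generates an underdetermined system with a known sparse signal and counts non-zero entries, which is exactly what $\left\| \cdot \right\|_{0}$ measures:

```python
import numpy as np

rng = np.random.default_rng(0)

n, p, k = 20, 100, 3                  # n << p; the true signal is k-sparse
Theta = rng.standard_normal((n, p))   # measurement matrix

# Build a k-sparse ground-truth signal s
s_true = np.zeros(p)
support = rng.choice(p, size=k, replace=False)
s_true[support] = rng.standard_normal(k)

y = Theta @ s_true                    # observed measurements

def l0_norm(s, tol=1e-10):
    """Number of non-zero entries: the 0-'norm' (not a true norm)."""
    return int(np.sum(np.abs(s) > tol))

print(l0_norm(s_true))  # 3
```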

Description

Compressed Sensing is also referred to as Compressive Sensing.

Motivation

The motivation behind Compressed Sensing is, of course, sparse regression itself. Unlike in ordinary regression analysis, the design matrix $\Phi$ is referred to as the Measurement Matrix, and the vector of regression coefficients $\mathbf{s}$ is referred to as the Signal¹.

The above figure visualizes solving an ordinary matrix equation versus Compressed Sensing, where the color of each pixel represents the magnitude of the corresponding component: the lighter the color, the closer it is to $0$, and white means exactly $0$. While any $\mathbf{s}$ satisfying the equation may do, as in the left figure, we want to find an $\mathbf{s}$ that contains as many $0$s as possible, as shown in the right figure.

Considering how matrix multiplication works, a component of $\mathbf{s}$ being $0$ means that $\mathbf{y}$ can be described without the corresponding column of $\Theta$. As the number of $0$ elements in $\mathbf{s}$ grows, $\Theta$ is effectively "compressed" horizontally, and identifying which columns are necessary amounts to "sensing"; hence the term Compressed Sensing is apt.
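Continuing the sketch above (names as before), this check confirms that the columns of $\Theta$ outside the support of $\mathbf{s}$ contribute nothing to $\mathbf{y}$:

```python
# y depends only on the columns of Theta where s is non-zero:
y_compressed = Theta[:, support] @ s_true[support]
print(np.allclose(Theta @ s_true, y_compressed))  # True
```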

Difficulty²

The problem, as mentioned above, is not merely to satisfy $\mathbf{y} = \Theta \mathbf{s}$ but to find the optimal solution $\hat{\mathbf{s}}$ that minimizes $\left\| \mathbf{s} \right\|_{0}$. The most brute-force approach is to try every possible support of $\mathbf{s}$, which requires examining $O \left( 2^{p} \right)$ combinations; this combinatorial search is NP-hard.
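A direct (and, for large $p$, hopeless) implementation of this combinatorial search might look like the following sketch, which tries supports of increasing size and accepts the first one that reproduces $\mathbf{y}$:

```python
from itertools import combinations

import numpy as np

def l0_brute_force(Theta, y, tol=1e-8):
    """Exhaustively search supports of increasing size and return the
    sparsest s with y = Theta @ s. Cost is O(2^p): tiny p only."""
    n, p = Theta.shape
    if np.linalg.norm(y) < tol:          # k = 0 case
        return np.zeros(p)
    for k in range(1, p + 1):
        for support in combinations(range(p), k):
            cols = Theta[:, list(support)]
            coef, *_ = np.linalg.lstsq(cols, y, rcond=None)  # fit on this support
            if np.linalg.norm(cols @ coef - y) < tol:
                s = np.zeros(p)
                s[list(support)] = coef
                return s
    return None
```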

Obviously, it is smarter to avoid trying all possibilities, and one idea is to use a norm other than the $l_{0}$-norm $\left\| \cdot \right\|_{0}$. For starters, consider the simpler $l_{2}$-norm and $l_{1}$-norm.
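The minimum $l_{2}$-norm solution, for instance, has a closed form via the Moore–Penrose pseudoinverse, and (continuing the sketch above) it is easy to see why it is unsuitable here: it satisfies the equation but is dense.

```python
# Minimum l2-norm solution: s = pinv(Theta) @ y
s_l2 = np.linalg.pinv(Theta) @ y
print(np.allclose(Theta @ s_l2, y))  # True: the equation is satisfied
print(l0_norm(s_l2))                 # typically 100: every entry non-zero
```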

Thinking geometrically about the Compressed Sensing problem as in the above figure, we need to find, on the [line](../2042) $\mathbf{y} = \Theta \mathbf{s}$, a solution with as many elements as possible equal to $0$. This is similar to the geometric explanation of how [Lasso regression differs from Ridge regression](../2571). Between these two, the $l_{1}$-norm certainly looks more suitable for Compressed Sensing than the $l_{2}$-norm, but that does not mean we can replace the $l_{0}$-norm with the $l_{1}$-norm without a valid justification.

Looking at visualizations of the $l_{q}$-norm for various $q \ge 0$, one can see immediately that for $q \le 1$, $\left\| \mathbf{s} \right\|_{q}$ behaves similarly to $\left\| \mathbf{s} \right\|_{0}$. We do not need to consider every $0 \le q \le 1$: if the gap between the $l_{0}$-norm and the $l_{1}$-norm can be narrowed, the Compressed Sensing problem becomes much easier to solve, since $l_{1}$ minimization is convex.
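Replacing the $l_{0}$-norm with the $l_{1}$-norm yields the Basis Pursuit problem, which can be rewritten as a linear program. The sketch below (continuing the variables above; `scipy.optimize.linprog` assumed) uses the standard split $-\mathbf{t} \le \mathbf{s} \le \mathbf{t}$ and minimizes $\sum_{i} t_{i}$:

```python
from scipy.optimize import linprog

def basis_pursuit(Theta, y):
    """min ||s||_1  subject to  Theta @ s = y, posed as a linear program
    over x = [s, t] with -t <= s <= t and objective sum(t)."""
    n, p = Theta.shape
    c = np.concatenate([np.zeros(p), np.ones(p)])
    A_eq = np.hstack([Theta, np.zeros((n, p))])
    I = np.eye(p)
    A_ub = np.vstack([np.hstack([I, -I]),      #  s - t <= 0
                      np.hstack([-I, -I])])    # -s - t <= 0
    res = linprog(c, A_ub=A_ub, b_ub=np.zeros(2 * p), A_eq=A_eq, b_eq=y,
                  bounds=[(None, None)] * p + [(0, None)] * p)
    return res.x[:p]

s_l1 = basis_pursuit(Theta, y)
print(np.max(np.abs(s_l1 - s_true)) < 1e-4)  # often True: exact recovery
```

With $n = 20$ Gaussian measurements of a $3$-sparse signal, this convex relaxation typically recovers $\mathbf{s}$ exactly, which is what the theory mentioned next makes precise.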

In fact, it has been proven that the Compressed Sensing problem can be solved efficiently when $\Theta$ satisfies the Restricted Isometry Property; on this foundation, Compressed Sensing has established a unique position and is widely applied across various fields as of 2023.
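Verifying the Restricted Isometry Property exactly is itself NP-hard, but a rough Monte Carlo check of how close $\Theta$ is to an isometry on random sparse vectors is straightforward. Below is a sketch, with hypothetical names and a $1 / \sqrt{n}$ normalization so that $\mathbb{E} \left\| \Theta \mathbf{s} \right\|^{2} = \left\| \mathbf{s} \right\|^{2}$:

```python
def rip_delta_estimate(Theta, k, trials=2000, seed=1):
    """Monte Carlo lower bound on the RIP constant delta_k: the worst
    observed deviation of ||Theta s||^2 from 1 over random k-sparse
    unit vectors. (Exact verification is NP-hard.)"""
    rng = np.random.default_rng(seed)
    n, p = Theta.shape
    delta = 0.0
    for _ in range(trials):
        support = rng.choice(p, size=k, replace=False)
        s = np.zeros(p)
        s[support] = rng.standard_normal(k)
        s /= np.linalg.norm(s)
        delta = max(delta, abs(np.linalg.norm(Theta @ s) ** 2 - 1.0))
    return delta

Theta_gauss = np.random.default_rng(2).standard_normal((50, 200)) / np.sqrt(50)
print(rip_delta_estimate(Theta_gauss, k=3))  # small => near-isometry on sparse vectors
```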


  1. Needell, D., Vershynin, R. (2009). Uniform Uncertainty Principle and Signal Recovery via Regularized Orthogonal Matching Pursuit. Foundations of Computational Mathematics, 9, 317–334. https://doi.org/10.1007/s10208-008-9031-3

  2. Brunton. (2022). Data-Driven Science and Engineering: Machine Learning, Dynamical Systems, and Control (2nd Edition): p. 110.