Naturally, the aspect of compressed sensing that first caught my attention was the underlying matrix computation. A raw signal or image can be regarded as a vector \(f\) with millions of components. We assume that \(f\) can be represented as a linear combination of certain basis functions: \(f = \Psi c\). The basis functions must be suited to a particular application. In our example, \(\Psi\) is the discrete cosine transform. We also assume that most of the coefficients \(c\) are effectively zero, so that \(c\) is sparse.

Sampling the signal involves another linear operator, \(b = \Phi f\). In our example, \(b\) is a few random samples of \(f\), so \(\Phi\) is a subset of the rows of the identity operator. But more complicated sampling operators are possible.

To reconstruct the signal, we must try to recover the coefficients by solving \(Ax = b\), where \(A = \Phi \Psi\). Once we have the coefficients, we can recover the signal itself by computing \(f \approx \Psi x\). Since this is a compression, \(A\) is rectangular, with many more columns than rows. Computing the coefficients \(x\) involves solving an underdetermined system of simultaneous linear equations, \(Ax = b\). In this situation, there are many more unknowns than equations.

The key to the almost magical reconstruction process is to impose a nonlinear regularization involving the \(\ell_1\) norm. The computation was done by \(\ell_1\)-magic, written by Justin Romberg and Emmanuel Candès when they were at Caltech.

The upper plot in Figure 2 shows the resulting solution, \(x\). We see that it has relatively few large components and that it closely resembles the DCT of the original signal. Moreover, the discrete cosine transform of \(x\), shown in the lower plot, closely resembles the original signal.
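The whole pipeline can be sketched in a few lines of Python with NumPy and SciPy. This is not the original \(\ell_1\)-magic code; it recasts the \(\ell_1\) minimization as a linear program and hands it to a generic LP solver. The sizes \(n\), \(m\), and the sparsity level \(k\) are illustrative choices, not values from the article.

```python
# Sketch of compressed-sensing reconstruction: minimize ||x||_1 subject to
# A x = b, with A = Phi * Psi. Assumed sizes n, m, k are illustrative.
import numpy as np
from scipy.fft import idct
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n, m, k = 256, 100, 8        # signal length, number of samples, sparsity

# f = Psi c: the signal is a sparse combination of DCT basis functions.
c = np.zeros(n)
c[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)
Psi = idct(np.eye(n), axis=0, norm="ortho")   # columns are basis functions
f = Psi @ c

# b = Phi f: Phi keeps a random subset of the rows of the identity operator.
rows = rng.choice(n, size=m, replace=False)
b = f[rows]

# A = Phi Psi is m-by-n with m << n, so A x = b is underdetermined.
A = Psi[rows, :]

# Recover x by minimizing ||x||_1 subject to A x = b, written as a linear
# program in the stacked variables [x; t] with the constraints -t <= x <= t.
I = np.eye(n)
res = linprog(
    c=np.concatenate([np.zeros(n), np.ones(n)]),   # minimize sum(t)
    A_ub=np.block([[I, -I], [-I, -I]]),            # x - t <= 0, -x - t <= 0
    b_ub=np.zeros(2 * n),
    A_eq=np.hstack([A, np.zeros((m, n))]),         # A x = b
    b_eq=b,
    bounds=[(None, None)] * n + [(0, None)] * n,   # x free, t >= 0
)
x = res.x[:n]

# With enough samples, the l1 solution recovers the sparse coefficients,
# and Psi @ x reconstructs the signal.
print("max coefficient error:", np.max(np.abs(x - c)))
```

Replacing the \(\ell_1\) norm with the more familiar \(\ell_2\) norm (i.e., `numpy.linalg.lstsq` on the underdetermined system) spreads the energy across all \(n\) coefficients and fails to recover the sparse solution; the \(\ell_1\) penalty is what makes the reconstruction work.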