Sampling in the Age of Sparsity
Recorded 12 May 2011 in Lausanne, Vaud, Switzerland
Event: KTN - Know Thy Neighbor
Sampling is a central topic not just in signal processing and communications, but in all fields where the world is analog and computation is digital. This includes sensing, simulating, and rendering the real world, estimating parameters, and using analog channels.
The question of sampling is very simple: when is there a one-to-one relationship between a continuous-time function and adequately acquired samples of this function? Sampling has a rich history, dating back to Whittaker, Nyquist, Kotelnikov, Shannon and others, and is an active area of contemporary research with fascinating new results. Classic results concern bandlimited functions, for which sampling at the Nyquist rate suffices for perfect reconstruction. These results were extended to shift-invariant and multiscale spaces during the development of wavelets. All these methods are based on subspace structures and on linear approximation. Irregular sampling, with known sampling times, relies on the theory of frames.
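The classic bandlimited result can be illustrated numerically: sample a signal of bandwidth B at the Nyquist rate 1/(2B) and reconstruct it with Shannon's sinc-interpolation formula. This is a minimal sketch; the signal, bandwidth, and window length are illustrative choices, not taken from the talk.

```python
import numpy as np

B = 4.0                      # bandwidth in Hz (highest frequency present)
T = 1.0 / (2.0 * B)          # Nyquist sampling interval 1/(2B)

def signal(t):
    # A signal bandlimited to B: sinusoids at 1.5 Hz and 3 Hz
    return np.sin(2 * np.pi * 1.5 * t) + 0.5 * np.cos(2 * np.pi * 3.0 * t)

# Finite window of Nyquist-rate samples (truncating the infinite sum)
n = np.arange(-200, 201)
samples = signal(n * T)

def reconstruct(t):
    # Shannon interpolation: x(t) = sum_n x(nT) sinc((t - nT)/T)
    return np.sum(samples * np.sinc((t - n * T) / T))

t0 = 0.3712                  # an arbitrary off-grid instant
err = abs(reconstruct(t0) - signal(t0))
print(f"reconstruction error at t={t0}: {err:.2e}")
```

The residual error comes only from truncating the interpolation sum to a finite window; it shrinks as the window grows.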
These classic results can be used to derive sampling theorems related to PDEs, to mobile sensing, as well as to sampling based on timing information. Recently, non-linear sampling methods have appeared. Non-linear approximation in wavelet spaces is powerful for approximation and compression. This indicates that functions that are sparse in a basis (but not necessarily confined to a fixed subspace) can be represented efficiently. The idea is even more general than sparsity in a basis, as pointed out in the framework of signals with finite rate of innovation. Such signals are non-bandlimited continuous-time signals, but with a parametric representation having a finite number of degrees of freedom per unit of time. This leads to sharp results on sampling and reconstruction of such sparse continuous-time signals, leading to sampling at Occam's rate.
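The finite-rate-of-innovation idea can be sketched on its canonical example: a stream of K Diracs, which has 2K degrees of freedom (locations and amplitudes) and can be recovered from just 2K Fourier coefficients using an annihilating filter (Prony's method). The locations and amplitudes below are illustrative values, not from the talk.

```python
import numpy as np

K = 2
t_true = np.array([0.2, 0.65])     # Dirac locations in [0, 1)
a_true = np.array([1.0, 1.5])      # Dirac amplitudes

# 2K Fourier coefficients of the stream: X[m] = sum_k a_k e^{-2*pi*i*m*t_k}
m = np.arange(2 * K)
X = (a_true * np.exp(-2j * np.pi * np.outer(m, t_true))).sum(axis=1)

# Annihilating filter h = [1, h1, h2] satisfies sum_l h[l] X[m-l] = 0
# for m = K, ..., 2K-1; solve the resulting 2x2 Toeplitz system.
A = np.array([[X[1], X[0]],
              [X[2], X[1]]])
b = -np.array([X[2], X[3]])
h = np.linalg.solve(A, b)

# The roots of H(z) are u_k = e^{-2*pi*i*t_k}: read the locations off their angles
u = np.roots(np.concatenate(([1.0], h)))
t_rec = np.sort(np.mod(-np.angle(u) / (2 * np.pi), 1.0))
print("recovered locations:", t_rec)
```

Note the count: 2K measurements for 2K degrees of freedom, i.e. sampling at the signal's rate of innovation rather than at a (here nonexistent) Nyquist rate.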
Among non-linear methods, compressed sensing and compressive sampling have generated a lot of attention. This is a discrete-time, finite-dimensional setup, with strong results on recovery obtained by relaxing the l_0 problem into l_1 optimization, or by using greedy algorithms. These methods have the advantage of unstructured measurement matrices (typically random ones) and therefore a certain universality, at the cost of some redundancy. We compare the two approaches, highlighting differences, similarities, and respective advantages.
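The greedy route mentioned above can be sketched with orthogonal matching pursuit: recover a k-sparse vector from far fewer random Gaussian measurements than its ambient dimension. The dimensions, seed, and the `omp` helper are illustrative assumptions, not the talk's specific construction.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 64, 32, 3                    # ambient dim, measurements, sparsity

x = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x[support] = rng.standard_normal(k)    # a k-sparse signal

A = rng.standard_normal((m, n)) / np.sqrt(m)   # unstructured random matrix
y = A @ x                              # m << n linear measurements

def omp(A, y, k):
    """Greedy recovery: repeatedly pick the column most correlated with
    the residual, then re-fit on the selected support by least squares."""
    residual, idx = y.copy(), []
    for _ in range(k):
        idx.append(int(np.argmax(np.abs(A.T @ residual))))
        coef, *_ = np.linalg.lstsq(A[:, idx], y, rcond=None)
        residual = y - A[:, idx] @ coef
    x_hat = np.zeros(A.shape[1])
    x_hat[idx] = coef
    return x_hat

x_hat = omp(A, y, k)
print("max recovery error:", np.max(np.abs(x_hat - x)))
```

The measurement matrix carries no structure tied to the signal, which is the universality the abstract refers to; the price is taking more measurements than the 2k that a structured (FRI-style) scheme would need.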
We finish by looking at selected applications in practical signal processing and communication problems. These cover wideband communications, noise removal, distributed sampling, and super-resolution imaging, to name a few. In particular, we describe a recent result on multichannel sampling with unknown shifts, which leads to an efficient super-resolution imaging method.